( 8 Jun 94)
                    *                                    *
                    * Section 5 - Programmer's Reference *
                    *                                    *
              This section describes features of the GAMESS
          implementation which are true for all machines.  See the
          section 'hardware specifics' for information on each
          machine type.  The contents of this section are:
                    o  Installation overview (sequential mode)
                    o  Files on the distribution tape
                    o  Names of source code modules
                    o  Programming conventions
                    o  Parallel version of GAMESS
                             TCGMSG toolkit
                             installation process
                             execution process
                             load balancing
                             timing examples
                             broadcast identifiers
                    o  Disk files used by GAMESS
                    o  Contents of DICTNRY master file

                          Installation overview
              GAMESS will run on a number of different machines
          under FORTRAN 77 compilers.  However, even given the F77
          standard there are still a number of differences between
          various machines.  For example, some machines have 32 bit
          word lengths, requiring the use of double precision, while
          others have 64 bit words and are used in single precision.
              Although there are many types of computers, there is
          only one (1) version of GAMESS.
              This portability is made possible mainly by keeping
          machine dependencies to a minimum (that is, writing in
          F77, not vendor specific language extensions).  The
          unavoidable few statements which do depend on the hardware
          are commented out, for example, with "*IBM" in columns
          1-4.  Before compiling GAMESS on an IBM machine, these
          four columns must be replaced by 4 blanks.  The process of
          turning on a particular machine's specialized code is
          dubbed "activation".
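              As a hypothetical illustration (the names MYTIME and
          TSTART are invented here, not actual GAMESS code), a master
          source file might contain

```fortran
*     portable code, compiled on every machine:
      TSTART = 0.0D+00
*     machine dependent line, inert until the "*IBM" in
*     columns 1-4 is replaced by four blanks by activation:
*IBM  CALL MYTIME(TSTART)
```

          After activation for an IBM machine, the last line becomes an
          ordinary executable statement beginning in column 7.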
              A semi-portable FORTRAN 77 program to activate the
          desired machine dependent lines is supplied with the
          GAMESS package as program ACTVTE.  Before compiling ACTVTE
          on your machine, use your text editor to activate the very
          few machine dependent lines it contains.
          Be careful not to change the DATA initialization!
              The task of building an executable form of GAMESS is:
                    activate     compile        link
                *.SRC --->  *.FOR  --->  *.OBJ  ---> *.EXE
                source     FORTRAN       object    executable
                 code        code         code       image
          where the intermediate files *.FOR and *.OBJ are discarded
          once the executable has been linked.  It may seem odd at
          first to delete FORTRAN code, but this can always be
          reconstructed from the master source code using ACTVTE.
              The advantage of maintaining only one master version
          is obvious.  Whenever any improvements are made, they are
          automatically in place for all the currently supported
          machines.  There is no need to make the same changes in a
          plethora of other versions.
              The control language needed to activate, compile, and
          link GAMESS on your brand of computer is probably present
          on the distribution tape.  These files should not be used
          without some examination and thought on your part, but
          should give you a starting point.

              There may be some control language procedures for one
          computer that cannot be duplicated on another.  However,
          some general comments apply:  Files named COMP will
          compile a single module.  COMPALL will compile all
          modules.  LKED will link together an executable image.
          RUNGMS will run a GAMESS job, and RUNALL will run all the
          example jobs.
              The first step in installing GAMESS should be to print
          the manual.  If you are reading this, you've got that
          done!  The second step would be to get the source code
          activator compiled and linked (note that the activator
          must be activated manually before it is compiled).  Third,
          you should now compile all the source modules (if you have
          an IBM, you should also assemble the two provided files).
          Fourth, link the program.  Finally, run all the short
          tests, and very carefully compare the key results shown in
          the 'sample input' section against your outputs.  These
          "correct" results are from a VAX, so there may be very
          tiny (last digit) precision differences for other
          machines.  That's it!
              Before starting the installation, you should read the
          pages describing your computer in the 'Hardware Specifics'
          section of the manual.  There may be special instructions
          for your machine.

                            Files for GAMESS
             *.DOC            The files you are reading now. You
                              should print these on 8.5 by 11 inch
                              white paper, using column one as
                              carriage control.  Double sided, 3
                              hole, 10 pitch laser output is best!
             *.SRC            source code for each module
             *.ASM            IBM mainframe assembler source
             *.C              C code used by some UNIX systems.
             EXAM*.INP        21 short test jobs (see TESTS.DOC).
             BENCH*.INP       13 longer test jobs.
              These are files related to some utility programs:
             ACTVTE.CODE      Source code activator.  Note that you
                              must use a text editor to MANUALLY
                              activate this program before using it.
             MBLDR.*          model builder (internal to Cartesian)
             CARTIC.*         Cartesian to internal coordinates
             CLENMO.*         cleans up $VEC groups
              There are files related to X windows graphics.
          See the file INTRO.MAN for their names.
              The remaining files are command language for the
          various machines.
             *.COM   VAX command language.  PROBE is especially
                     useful for persons learning GAMESS.
             *.MVS   IBM command language for MVS (dreaded JCL).
             *.CMS   IBM command language for CMS.  These should
                     be copied to filetype EXEC.
             *.CSH   UNIX C shell command language.  These should
                     have the "extension" omitted, and have their
                     mode changed to executable.

                      Names of source code modules
               The source code for GAMESS is divided into a number
          of sections, called modules, each of which does related
          things, and is a handy size to edit.  The following is a
          list of the different modules, what they do, and notes on
          their machine dependencies.
          module   description                         dependency
          -------  -------------------------           ----------
          BASECP   SBK and HW valence basis sets
          BASEXT   DH, MC, 6-311G extended basis sets
          BASHUZ   Huzinaga MINI/MIDI basis sets to Xe
          BASHZ2   Huzinaga MINI/MIDI basis sets Cs-Rn
          BASN21   N-21G basis sets
          BASN31   N-31G basis sets
          BASSTO   STO-NG basis sets
          BLAS     level 1 basic linear algebra subprograms
          CPHF     coupled perturbed Hartree-Fock          1
          CPROHF   open shell/TCSCF CPHF                   1
          ECP      pseudopotential integrals
          ECPHW    Hay/Wadt effective core potentials
          ECPLIB   initialization code for ECP
          ECPSBK   Stevens/Basch/Krauss/Jasien/Cundari ECPs
          EIGEN    Givens-Householder, Jacobi diagonalization
          EFDMY    dummy source file
          FFIELD   finite field polarizabilities
          FRFMT    free format input scanner
          GAMESS   main program, single point energy
                   and energy gradient drivers, misc.
          GRD1     one electron gradient integrals
          GRD2A    two electron gradient integrals         1
          GRD2B     "     "        "         "
          GRD2C     "     "        "         "
          GRD2D     "     "        "         "
          GRD2E     "     "        "         "
          GRD2F     "     "        "         "
          GUESS    initial orbital guess
          GUGDGA   Davidson CI diagonalization             1
          GUGDGB       "    "        "                     1
          GUGDM    1 particle density matrix
          GUGDM2   2 particle density matrix               1
          GUGDRT   distinct row table generation
          GUGEM    GUGA method energy matrix formation     1
          GUGSRT   sort transformed integrals              1
          GVB      generalized valence bond HF-SCF         1

          module   description                         dependency
          -------  -------------------------           ----------
          HESS     hessian computation drivers
          HSS1A    one electron hessian integrals
          HSS1B     "     "        "        "
          HSS2A    two electron hessian integrals          1
          HSS2B     "     "        "        "
          INPUTA   read geometry, basis, symmetry, etc.
          INPUTB    "     "        "       "
          INPUTC    "     "        "       "
          INT1     one electron integrals
          INT2A    two electron integrals                  1
          INT2B     "     "        "
          INT2C    roots for Rys polynomials
          IOLIB    input/output routines, etc.             2
          LAGRAN   CI Lagrangian matrix                    1
          LOCAL    various localization methods            1
          MCSCF    second order MCSCF calculation          1
          MCTWO    two electron terms for MCSCF            1
          MP2      2nd order Moller-Plesset                1
          MPCDAT   MOPAC parameterization
          MPCGRD   MOPAC gradient
          MPCINT   MOPAC integrals
          MPCMOL   MOPAC molecule setup
          MPCMSC   miscellaneous MOPAC routines
          MTHLIB   printout, matrix math utilities         1
          NAMEIO   namelist I/O simulator
          ORDINT   sort atomic integrals                   1
          PARLEY   communicate to other programs
          PRPEL    electrostatic properties
          PRPLIB   miscellaneous properties
          PRPPOP   population properties
          RHFUHF   RHF, UHF, and ROHF HF-SCF               1
          RXNCRD   intrinsic reaction coordinate
          SCFLIB   HF-SCF utility routines, DIIS code
          SCRF     self consistent reaction field
          SPNORB   1 e- spin-orbit coupling terms
          STATPT   geometry and transition state finder
          STUB     small version dummy routines 
          SYMORB   orbital symmetry assignment
          SYMSLC      "        "         "                 1
          TCGSTB   stub routines to link a serial GAMESS
          TRANS    partial integral transformation         1
          TRFDM2   backtransform 2 e- density matrix       1
          TRNSTN   CI transition moments
          TRUDGE   nongradient optimization

          module   description                         dependency
          -------  -------------------------           ----------
          UNPORT   unportable, nasty code            3,4,5,6,7,8
          VECTOR   vectorized version routines             9
          VIBANL   normal coordinate analysis
          ZMATRX   internal coordinates

          Ordinarily, you will not use STUB.SRC, which is linked
          only if your system has a very small amount of physical
          memory.

          In addition, the IBM mainframe version uses the following
          assembler language routines:  ZDATE.ASM, ZTIME.ASM.
          UNIX versions may use the C code:  ZMIPS.C, ZUNIX.C.
              The machine dependencies noted above are:
          1) packing/unpacking           2) OPEN/CLOSE statements
          3) machine specification       4) fix total dynamic memory
          5) subroutine walkback         6) error handling calls
          7) timing calls                8) LOGAND function
          9) vector library calls

                          Programming Conventions
                   The following "rules" should be strictly
                   adhered to when making changes in GAMESS,
                   as they are important in maintaining
                   portability.
              Rule 1.  If there is a way to do it that works on all
          computers, do it that way.  Commenting out statements for
          the different types of computers should be your last
          resort.  If it is necessary to add lines specific to your
          computer, put in code for all the other supported machines.
          Even if you don't have access to all the types of
          supported hardware, you can look at the other machine
          specific examples found in GAMESS, or ask for help from
          someone who does understand the various machines.  If a
          module does not already contain some machine specific
          statements (see the above list) be especially reluctant to
          introduce dependencies.
              Rule 2.  a) Use IMPLICIT DOUBLE PRECISION(A-H,O-Z)
          specification statements throughout.  b) All floating
          point constants should be entered as if they were in
          double precision.  The constants should contain a decimal
          point and a signed two digit exponent.  A legal constant
          is 1.234D-02.  Illegal examples are 1D+00, 5.0E+00, and
          3.0D-2.  c) Double precision BLAS names are used
          throughout, for example DDOT instead of SDOT.
                   The source code activator ACTVTE will
                   automatically convert these double
                   precision constructs into the correct
                   single precision expressions for machines
                   that have 64 rather than 32 bit words.
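              A small fragment obeying these conventions might look
          like the following (the variable names are invented for
          illustration; DDOT is the standard level 1 BLAS dot product):

```fortran
      IMPLICIT DOUBLE PRECISION(A-H,O-Z)
      PARAMETER (ZERO=0.0D+00, ONE=1.0D+00)
C
C     every constant has a decimal point and a signed
C     two digit exponent, so ACTVTE can convert it to
C     single precision on 64 bit machines.
C
      SCALE = 1.234D-02
      SUM   = SCALE*DDOT(N,X,1,Y,1) + ONE
```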
              Rule 3.  FORTRAN 77 allows the use of generic
          functions.  Thus the routine SQRT should be used in place
          of DSQRT, as this will automatically be given the correct
          precision by the compilers.  Use ABS, COS, INT, etc.  Your
          compiler manual will tell you all the generic names.
              Rule 4.  Every routine in GAMESS begins with a card
          containing the name of the module and the routine.  An
          example is "C*MODULE xxxxxx  *DECK yyyyyy".  The second
          star is in column 18.  Here, xxxxxx is the name of the
          module, and yyyyyy is the name of the routine.
          Furthermore, the individual decks yyyyyy are stored in
          alphabetical order.  This rule is designed to make it
          easier for a person completely unfamiliar with GAMESS to
          find routines.  The trade off for this is that the driver
          for a particular module is often found somewhere in the
          middle of that module.

              Rule 5.  Whenever a change is made to a module, this
          should be recorded at the top of the module.  The
          information required is the date, initials of the person
          making the change, and a terse summary of the change.
              Rule 6.  No lower case characters, no more than 6
          letter variable names, no imbedded tabs, statements must
          lie between columns 7 and 72, etc.  In other words, old
          style syntax.
                                 * * *
                   The next few "rules" are not adhered to
                   in all sections of GAMESS.  Nonetheless
                   they should be followed as much as
                   possible, whether you are writing new
                   code, or modifying an old section.
              Rule 7.  Stick to the FORTRAN naming convention for
          integer (I-N) and floating point variables (A-H,O-Z).  If
          you've ever worked with a program that didn't obey this,
          you'll understand why.
              Rule 8.  Always use a dynamic memory allocation
          routine that calls the real routine.  A good name for the
          memory routine is to replace the last letter of the real
          routine with the letter M for memory.
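              Schematically, with all names and argument lists invented
          for illustration (the actual memory partitioning calls are
          not shown, and X is assumed to be the dynamic memory array),
          such a pair might look like:

```fortran
C     hypothetical "memory" routine XYZM, which obtains
C     addresses LA and LB for two scratch arrays from the
C     dynamic memory pool, then calls the real routine XYZ.
      SUBROUTINE XYZM(N)
      IMPLICIT DOUBLE PRECISION(A-H,O-Z)
C     ...obtain LA and LB from the memory allocator here...
      CALL XYZ(X(LA),X(LB),N)
      RETURN
      END
```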
              Rule 9.  All the usual good programming techniques,
          such as indented DO loops ending on CONTINUEs,
          IF-THEN-ELSE where this is clearer, 3 digit statement
          labels in ascending order, no three branch GO TO's,
          descriptive variable names, 4 digit FORMATs, etc, etc.
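              For instance, a loop written in this style (variable
          names invented for illustration) would be

```fortran
      SUM = 0.0D+00
      DO 120 I = 1,NATOMS
         DO 110 J = 1,I
            SUM = SUM + DM(I,J)
  110    CONTINUE
  120 CONTINUE
```

          with indented loop bodies, each DO ending on its own
          CONTINUE, and 3 digit statement labels in ascending order.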
                   The next set of rules relates to coding
                   practices which are necessary for the
                   parallel version of GAMESS to function
                   sensibly.  They must be followed without
                   exception.
              Rule 10.  All open, rewind, and close operations on
          sequential files must be performed with the subroutines
          SEQOPN, SEQREW, and SEQCLO respectively.  You can find
          these routines in IOLIB, they are easy to use.

              Rule 11.  All READ and WRITE statements for the
          formatted files 5, 6, 7 (variables IR, IW, IP, or named
          files INPUT, OUTPUT, PUNCH) must be performed only by the
          master task.  Therefore, these statements must be enclosed
          in "IF (MASWRK) THEN" clauses.  The MASWRK variable is
          found in the /PAR/ common block, and is true on the master
          process only.  This avoids duplicate output from slave
          processes.  At the present time, all other disk files in
          GAMESS also obey this rule.
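              A hypothetical print statement obeying this rule (the
          variable ETOT and the FORMAT text are invented; MASWRK and
          the unit number IW are assumed available from their common
          blocks) would be

```fortran
      IF (MASWRK) THEN
         WRITE(IW,9010) ETOT
      END IF
 9010 FORMAT(1X,'FINAL ENERGY IS',F20.10)
```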
              Rule 12.  All error termination is done by means of
          "CALL ABRT" rather than a STOP statement.  Since this
          subroutine never returns, it is OK to follow it with a
          STOP statement, as compilers may not be happy without a
          STOP as the final executable statement in a routine.
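              A hypothetical error exit following this rule (the test,
          variable names, and message are invented for illustration)
          looks like

```fortran
      IF (NERR.GT.0) THEN
         IF (MASWRK) WRITE(IW,9020) NERR
         CALL ABRT
         STOP
      END IF
 9020 FORMAT(1X,'INPUT CONTAINS',I5,' ERRORS, JOB CANNOT RUN.')
```

          Note that the WRITE is guarded by MASWRK, as required by
          Rule 11, while the STOP merely satisfies the compiler.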

                        Parallel version of GAMESS
              Under the auspices of a joint ARPA and Air Force
          project, GAMESS has begun the arduous journey toward
          parallelization.  Currently, nearly all ab initio 
          methods run in parallel, although many of these still
          have a step or two running sequentially only.  Only MP2
          for UHF/ROHF has no parallel method coded.  In addition,
          MOPAC runs can be run on one node only.  More information
          about the parallel implementation is given below, after the 
          directions for installation and execution.

              If a parallel linked version of GAMESS is run on only
          one node, it behaves as if it is a sequential version, and
          the full functionality of the program is available to you.
                                  * * *
              The two major philosophies for distributed memory MIMD
          (multiple instruction on multiple data) parallel programs
          are:

            1) Have a master program running on one node do all of
               the work, except that smaller slave programs running
               on the other nodes are called to do their part of the 
               compute intensive tasks, or 
            2) Have a single program duplicate all work except for 
               compute intensive code where each node does only its
               separate piece of the work (SPMD, which means single
               program, multiple data).
              We have chosen to implement the SPMD philosophy in
          GAMESS for several reasons.  The first of these is that
          only one code is required (not master and slave codes).
          Therefore, two separate GAMESS codes do not need to be
          maintained.  The second reason is also related to
          maintenance.  GAMESS is constantly evolving as new code
          is incorporated into it.  The parallel calls are "hidden"
          at the lowest possible subroutine levels to allow
          programmers to add their code with a minimum of extra
          effort to parallelize it.  Therefore, new algorithms or
          methods are available to all nodes.  The final reason 
          given here is that duplication of computation generally 
          cuts down on communication.  
              The only portion of the master/slave concept to
          survive in GAMESS is that the first process (node 0)
          handles reading all input lines and does all print out
          and PUNCH file output, as well as all I/O to the DICTNRY 
          master file.  In this sense node 0 is a "master".  A 
          reminder here to all programmers:  you should STRICTLY 
          obey the rules for programming laid out in the Programming 
          Conventions Section of this manual; especially the ones 
          involving MASWRK in I/O statements!

                                  * * *
              Several tools are available for parallelization of
          codes.  We have chosen to use the parallelization tool
          TCGMSG from Robert Harrison, now at Pacific Northwest
          Laboratory.  This message passing toolkit has been ported
          to many UNIX machines and was written specifically for 
          computational chemistry.  It works on distributed memory 
          MIMD systems, on Ethernetworks of ordinary workstations, 
          and on shared memory parallel computers.  Thus TCGMSG 
          allows one to run parallel GAMESS on a fairly wide 
          assortment of hardware.
              Be sure to note that TCGMSG does support communication
          between Ethernet workstations of different brands and/or
          speeds.  For example, we have been able to run on a 3 node
          parallel system built from a DECstation, an SGI Iris, and
          an IBM RS/6000! (see XDR in $SYSTEM.)  It is also useful to
          note that your Ethernet parallel system does not have to 
          contain a power of two number of nodes.
              TCGMSG uses the best interprocess communication
          available on the hardware being used.  For an Ethernetwork
          of workstations, this means that TCP/IP sockets are used
          for communication.  In turn, this means it is extremely
          unlikely you will be able to include non-Unix systems in
          an Ethernet parallel system.
                                  * * *
              If you are trying to run on a genuine parallel system
          on which TCGMSG does not work, you may still be in luck.
          The "stubs" TCGSTB.SRC can be used to translate from the
          TCGMSG calls sprinkled throughout GAMESS to some other
          message passing language.  For example, we are able to
          use GAMESS on the IBM SP1, Intel Paragon, and Thinking 
          Machines CM-5 in this way, so there is no need to install
          TCGMSG to run GAMESS on these systems.

                                  * * *
             Our experience with parallel GAMESS is that it is quite
          robust in production runs.  In other words, most of the
          grief comes during the installation phase!  TCGMSG will 
          install and execute without any special privileges.
              The first step in getting GAMESS to run in parallel is
          to link GAMESS in sequential mode, against the object file
          from TCGSTB.SRC, and ensure that the program is working
          correctly in sequential mode.

              Next, obtain a copy of the TCGMSG toolkit.  This is
          available by anonymous ftp from ftp.tcg.anl.gov.  Go to
          the directory /pub/tcgmsg and, using binary mode, transfer
          the file tcgmsg.4.04.tar.Z (or a higher version).

             Unpack this file with 'uncompress' and 'tar -xvf'.
          The only modification we make to TCGMSG before compiling
          it is to remove all -DEVENTLOG flags from the prototype 
          file tcgmsg/ipcv4.0/Makefile.proto.  Then, use the makefile 
          provided to build the TCGMSG library
               chdir ~/tcgmsg
               make all MACHINE=IBMNOEXT
          If your machine is not an IBM RS/6000, substitute the name
          of your machine instead.  At this point you should try the 
          simple "hello" example,
               chdir ipcv4.0
               parallel hello
          to make sure TCGMSG is working.  
              Finally, link GAMESS against the libtcgmsg.a library
          instead of tcgstb.o to produce a parallel executable for
          GAMESS.  It is not necessary to recompile to accomplish
          this.  Instead just change the 'lked' script and relink.
                                  * * *
             Execute GAMESS by modifying the 'pargms' script to
          invoke the TCGMSG program 'parallel', according to the
          directions within that script.  You also must create a
          'gamess.p' file in your home directory, such as
              # user, host, nproc, executable, workdir
              theresa  si.fi.ameslab.gov 1
                     /u/theresa/gamess/gamess.01.x /scr/theresa
              windus   ge.fi.ameslab.gov 1
                     /u/windus/gamess/gamess.01.x /wrk/windus
              The fields in each line are:  username on that
          machine, hostname of that machine, number of processes to
          be run on that machine, full file name of the GAMESS
          executable on that machine, and working directory on that
          machine.  Comments begin with a # character.  Although
          TCGMSG allows long lines to continue on to a new line,
          as shown above, you should not do this.  The execution
          script provided with GAMESS will automatically delete
          work files established in the temporary directories, but
          only if this script gives all host info on a single line.
          A detailed explanation of each field follows:

              The first hostname given must be the name of the
          machine on which you run the 'pargms' script.  This
          script defines environment variables specifying the
          location of the input and output files.  The environment
          is not passed to other nodes by TCGMSG's "parallel"
          program, so the master process (node 0) running "pargms"
          **must** be the first line of your gamess.p file.

             The hostname may need to be the shortened form, rather
          than the full dotted name, especially on SGI and Unicos.
          In general, the correct choice is whatever the response
          to executing the command "hostname" is.
              The processes on other workstations are generated
          by use of the Unix 'rsh' command.  This means that you
          must set up a .rhosts file in your home directory on each
          node on which you intend to run in parallel.  This file
          validates login permission between the various machines,
          by mapping your accounts on one machine onto the others.
          For example, the following .rhosts file might be in
          Theresa's home directory on both systems,
               si.fi.ameslab.gov theresa
               ge.fi.ameslab.gov windus
          You can test this by checking that 'rsh' works, using a
          command such as the following (from si.fi.ameslab.gov)
               rsh ge.fi.ameslab.gov -l windus 'df'
          and then try it in the reverse direction as well.
              Note that the number of processes to be started on a
          given machine is ordinarily one.  The only exception
          is if you are running on a multiCPU box, with a common
          memory.  In this case, gamess.p should contain just one
          line, starting n processes on your n CPUs.  This will
          use shared memory communications rather than sockets to
          pass messages, and is more efficient.

              The executable file does not have to be duplicated
          on every node, although as shown in the example it can
          be.  If you have a homogeneous Ethernet system, and there
          is a single file server, the executable can be stored
          on this server to be read by the other nodes by NFS.
          Of course, if you have a heterogeneous network, you must
          build a separate executable for each different brand of 
          computer you have.
              At present GAMESS may write various binary files to the
          working directory, depending on what kind of run you are
          doing.   In fact, the only type of run which will not open
          files on the other nodes is a direct SCF, non-analytic
          hessian job.  Any core dump your job might produce will end
          up in this work directory as well.
                                  * * *
              We have provided you with a script named 'seqgms'
          which will run a parallel-linked version of GAMESS using
          only one process on your current machine.  Seqgms will
          automatically build a single node .p file.  Using this
          script means you need to keep only a parallel-linked
          GAMESS executable, and yet you still retain access to the
          parts of GAMESS that do not yet run in parallel.

                                  * * *

              We turn now to a description of the way each major
          parallel section of GAMESS is implemented, and give 
          some suggestions for efficient usage.
                                  * * *

              The HF wavefunctions can be evaluated in parallel
          using either conventional disk storage of the integrals,
          or via direct recomputation of the integrals.  Assuming
          the I/O speed of your system is good, direct SCF is
          *always* slower than disk storage.  But, a direct SCF
          might be faster if your nodes access disk via NFS 
          over the Ethernet, or if you are running on an Intel
          or CM-5 machine.  However, if you are running on Ethernetted
          workstations which have large local disks on each one,
          then conventional disk based SCF is probably fastest.
              When you run a disk based SCF in parallel, the
          integral files are opened in the work directory which
          you defined in your gamess.p file.  Only the subset
          of integrals computed by each node are stored on that
          node's disk space.  This lets you store integral files
          (in pieces) that are larger than will fit on any one
          of your computers.
              You may wish to experiment with both options, so
          that you learn which is fastest on your hardware setup.
                                  * * *
              One of the most important issues in parallelization is
          load balancing.  Currently, GAMESS has two algorithms
          available for load balancing of the two-electron integrals
          and gradients.  The first is a simple inner loop algorithm
          (BALTYP=LOOP).  The work of the inner most loop is split
          up so the next processor works on the next loop occurrence.
          If all processors are of the same speed and none of the
          processors is dedicated to other work (for example, an
          Intel), this is the most effective load balancing technique.
              The second method is designed for systems where the
          processor speeds may not be the same, or where some of the
          processors are doing other work (such as a system of equal
          workstations in which one of them might be doing other
          things).  In this technique, as soon as a processor 
          finishes its previous task, it takes the next task on the
          list of work to be done.  Thus, a faster node can take
          more of the work, allowing all nodes to finish the run at
          the same time.  This method is implemented throughout most
          of GAMESS (see BALTYP=NXTVAL in $SYSTEM).  It requires
          some extra work coordinating which tasks have been done
          already, so NXTVAL adds a communication penalty of about
          5% to the total run time.
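              The two schemes can be sketched as follows (a toy
          Python model; GAMESS implements this in FORTRAN, with the
          dynamic counter provided by TCGMSG's NXTVAL):

```python
import itertools

def loop_balance(tasks, n_nodes, me):
    """BALTYP=LOOP idea: node 'me' statically takes every
    n-th task of the innermost loop."""
    return [t for i, t in enumerate(tasks) if i % n_nodes == me]

def nxtval_balance(tasks, counter, me):
    """BALTYP=NXTVAL idea: a shared counter hands the next undone
    task to whichever node asks first, so a faster node simply
    asks more often and ends up with more of the work."""
    mine = []
    while True:
        i = next(counter)           # stands in for TCGMSG's NXTVAL
        if i >= len(tasks):
            return mine
        mine.append(tasks[i])

tasks = list(range(8))
# static split: node 0 of 2 gets tasks 0, 2, 4, 6
assert loop_balance(tasks, 2, 0) == [0, 2, 4, 6]
# dynamic split: a lone node draining the shared counter gets all
assert nxtval_balance(tasks, itertools.count(), 0) == tasks
```

          The 5% penalty quoted above is the cost of the extra
          counter traffic that the dynamic scheme requires.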

              All integral sections (meaning the ordinary integrals, 
          gradient integrals, and hessian integrals) have both LOOP
          and NXTVAL balancing implemented.  Thus all of a HF level
          run involving only the energy and gradient has both load
          balancing techniques.  Analytic HF hessians also have both
          balancing techniques for the integral transformation step.
              The parallel CI/MCSCF program also contains both 
          balancing algorithms, except that for technical reasons 
          MCSCF gradient computation will internally switch to the
          LOOP balancing method for that step only.  

              The parallel MP2 program uses only LOOP balancing
          during the MP2 step itself, although it will use either
          balancing method during the preliminary SCF.

              The IBM SP1, Intel, and CM-5 always use LOOP balancing,
          ignoring your input BALTYP.
                                  * * *
              You can find performance numbers for conventional and
          direct SCF, as well as gradient evaluation in the paper
          M.W.Schmidt, et al., J.Comput.Chem. 14, 1347-1363(1993).

              Data for the MCSCF program is not yet published, so
          we will include one example here.  The data are from
          IBM RS/6000 model 350s, connected by Ethernet, using 
          LOOP balancing.  CPU times (in seconds) for one iteration:
             # CPUs=       1       2       3       4       5
                           -       -       -       -       -
             MO guess     3.8     4.1     5.3     5.2     5.1
             AO ints    391.9   392.0   391.5   391.0   391.0
             DRT          0.5     0.5     0.6     0.6     0.6
             transf    1539.1   764.5   616.2   461.0   304.7
             CI           1.4     0.9     0.8     0.7     0.6
             DM2          0.1     0.1     0.2     0.2     0.2
             Lag+Hess    16.6    20.6    26.8    26.3    25.3
             NR          25.2    27.0    25.4    25.4    25.4
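              For any step, the speedup t(1)/t(p) and efficiency
          t(1)/(p*t(p)) can be read off the table.  Using the
          transformation row above:

```python
# Speedup and parallel efficiency of the integral transformation
# step, computed from the timing table above (times in seconds).
t = {1: 1539.1, 2: 764.5, 3: 616.2, 4: 461.0, 5: 304.7}

def speedup(p):
    return t[1] / t[p]

def efficiency(p):
    return speedup(p) / p

assert speedup(2) > 2.0          # slightly superlinear on 2 CPUs
assert 5.0 < speedup(5) < 5.1    # about 5x on 5 CPUs
```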

              The first three steps precede the MCSCF iterations
          and are not parallel.  The integral transformation, and
          generation of the CI Hamiltonian and its diagonalization
          are reasonably parallel (the above example has a trivial
          number of CSFs).  Large global sums overwhelm the
          parallel construction of the Lagrangian and orbital
          Hessian.  At present the Newton-Raphson orbital change
          is running sequentially.  Not shown: The backtransform
          of the DM2 to the AO basis to set up gradient calculation 
          runs close to sequentially, but the gradient integral
          computation is perfectly parallel.

              A CI/MCSCF job will open several disk files on each
          node.  For example, if the integral transformation
          is not being run in direct mode (see DIRTRF in $TRANS)
          then each node will compute and store a full copy of the
          AO integrals.  Each node will store a subset of the
          transformed integrals, the CI Hamiltonian, and the density 
          matrix.  The algorithm thus demands not only disk storage
          on each node, but also a reasonable I/O bandwidth.  We
          have not had the opportunity to run the MCSCF code on an
          Intel Paragon or a Thinking Machines CM-5, which are not
          known for their I/O capacity.  Similar comments apply to
          analytic hessians: you must have disk storage on each
          node, reachable at a reasonable bandwidth.

              The integral transformation just described is also
          used to drive both parallel analytic hessians and energy 
          localizations.  Thus the scalability of parallel hessians
          is much better than described by T.L.Windus, M.W.Schmidt,
          M.S.Gordon, Chem.Phys.Lett., 216, 375-379(1993), in that
          all steps but the coupled Hartree-Fock are now parallel.
              The closed shell parallel MP2 computation is adapted
          from Michel Dupuis' implementation.  In contrast to the
          usual transformation, the specialized MP2 transformation
          has the AO integrals actually distributed over each node,
          instead of being replicated on each.  Obviously this uses
          much less disk storage!  However, since each node needs to 
          work with the full AO integral list, the subset stored on
          each node must be broadcast to all other nodes.  Thus the
          savings in disk storage comes at the expense of substantial
          extra communication.  Initial tests show that the extra
          communications prevent the MP2 code from scaling very well
          when Ethernet (a rather slow communication channel) is used
          to tie together workstations.  We do not yet have any 
          information about the code's performance on a machine such
          as the IBM SP1 with fast communications.

              To summarize, the normal transformation (used by CI,
          MCSCF, energy localization, analytic hessians) must store
          duplicate lists of AO integrals on each node, but has almost
          no internode communication, and thus scales well on even
          low speed networks.  The MP2 transformation stores only a
          subset of AO integrals on each node, but requires high speed
          communications to send each integral to all nodes.  Time
          will tell us which method is wiser.  Note that both of the
          transformations distribute the memory needs, which can be
          substantial on one node, over all the nodes.  If a direct
          integral transformation is done, both methods must evaluate
          the full AO integral list during each pass.  Being able to
          distribute the memory (i.e. the passes) over all nodes means
          that direct transformations make more sense in parallel than
          in sequential mode.  In fact, each node may need to compute
          the AO integrals only once, instead of many times as when
          the job is run on one node.  Note, however, that both direct
          transforms compute the full AO integral list on each node
          during each pass, a blatant sequential bottleneck.
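              The tradeoff can be put in rough numbers.  The sketch
          below is a back-of-the-envelope model (not GAMESS
          internals); N stands for the AO integral count and p for
          the node count:

```python
# Rough storage/communication model for the two transformations.
# N = number of AO integrals, p = number of nodes.  These are
# order-of-magnitude estimates only.

def normal_transform(N, p):
    """Replicated AO list: full copy on every disk, no broadcasts."""
    return {"disk_per_node": N, "words_broadcast": 0}

def mp2_transform(N, p):
    """Distributed AO list: 1/p of the list per disk, but each
    node's subset must be broadcast to the other p-1 nodes."""
    return {"disk_per_node": N // p,
            "words_broadcast": (N // p) * (p - 1) * p}

a = normal_transform(1_000_000, 4)
b = mp2_transform(1_000_000, 4)
assert b["disk_per_node"] < a["disk_per_node"]   # p times less disk
assert b["words_broadcast"] > 0                  # but real comm cost
```

          On a slow channel such as Ethernet the broadcast term
          dominates, which matches the scaling observed above.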

              All types of ab initio runs (except for UHF/ROHF MP2 
          energies) should now run in parallel.  However, only the
          code for HF energies and gradients is mature, so several
          sequential bottlenecks remain.  The following steps of a
          parallel run will be conducted sequentially by the master:
             MCSCF: solution of Newton-Raphson equations
             analytic hessians: the coupled Hartree-Fock
             energy localizations: the orbital localization step
             transition moments/spin-orbit: the final property step
          However, all other steps (such as the evaluation of the 
          underlying wavefunction) do speed up in parallel.  Other
          steps which do not scale well, although they do speed up
          slightly are:
             HF: solution of SCF equations
             MCSCF/CI: Hamiltonian and 2 body density generation
             MCSCF: 2 body density back transformation
          Future versions of GAMESS will address these bottlenecks.  
          In the meantime, some trial and error will teach you how
          many nodes can effectively contribute to any particular 
          type of run.

             One example, using the same RS/6000-350 machines and
          molecule (bench12.inp converted to runtyp=hessian, with
          2,000,000 words and baltyp=loop) gives the following
          replacement for Table 1 of the Chem.Phys.Lett. 216,
          375-379(1993) paper:
                     p=             1          2          3
                                   ---        ---        ---
               setup              0.57       0.69       0.73
               1e- ints           1.10       0.87       0.88
               huckel guess      15.77      15.74      16.17
               2e- ints         111.19      55.34      37.42
               RHF cycles       223.13     103.26      79.44
               properties         2.23       2.46       2.63
               2e- ints             --     111.28     110.97
               transformation  1113.67     552.38     381.09
               1e- hess ints     28.20      16.46      14.63
               2e- hess ints   3322.92    1668.86    1113.37
               CPHF            1438.66    1433.34    1477.32
                               -------    -------    -------
               total CPU       6258.01    3961.34    3235.27
               total wall      8623(73%)  5977(66%)  5136(63%)
          so you can see the CPHF is currently a hindrance to full
          scalability of the analytic hessian program.
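              The percentages in the last row of the table are simply
          the ratio of total CPU time to total wall time, which can
          be checked directly:

```python
# The wall-clock percentages in the table are CPU/wall ratios,
# taken from the hessian timing table above (times in seconds).
cpu  = {1: 6258.01, 2: 3961.34, 3: 3235.27}
wall = {1: 8623.0,  2: 5977.0,  3: 5136.0}

for p, pct in [(1, 73), (2, 66), (3, 63)]:
    assert round(100.0 * cpu[p] / wall[p]) == pct
```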

                        List of parallel broadcast numbers
              GAMESS uses TCGMSG calls to pass messages between the
          parallel processes.  Every message is identified by a
          unique number; the following list shows how the numbers
          are used at present.  If you need to add to these, look at
          the existing code and use the following numbers as
          guidelines.  All broadcast numbers must be between 1 and
          32767.
               20            : Parallel timing
              100 -  199     : DICTNRY file reads
              200 -  204     : Restart info from the DICTNRY file
              210 -  214     : Pread
              220 -  224     : PKread
              225            : RAread
              230            : SQread
              250 -  265     : Nameio
              275 -  310     : Free format
              325 -  329     : $PROP group input
              350 -  354     : $VEC group input
              400 -  424     : $GRAD group input
              425 -  449     : $HESS group input
              450 -  474     : $DIPDR group input
              475 -  499     : $VIB group input
              500 -  599     : matrix utility routines
              800 -  830     : Orbital symmetry
              900            : ECP 1e- integrals
              910            : 1e- integrals
              920 -  975     : EF and SCRF integrals
              980 -  999     : property integrals
             1000 - 1025     : SCF wavefunctions
             1050            : Coulomb integrals
             1200 - 1215     : MP2
             1300            : localization
             1500            : One-electron gradients
             1505 - 1599     : EF and SCRF gradients
             1600 - 1602     : Two-electron gradients
             1605 - 1615     : One-electron hessians
             1650 - 1665     : Two-electron hessians
             1700            : integral transformation
             1800            : GUGA sorting
             1850 - 1865     : GUGA CI diagonalization
             1900 - 1905     : GUGA DM2 generation
             2000 - 2010     : MCSCF
             2100 - 2120     : coupled perturbed HF
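              When adding new message numbers, a quick consistency
          check like the following guards against overlaps and
          out-of-range identifiers (an illustrative helper, not part
          of GAMESS; the range list is abbreviated):

```python
# Sanity check for new broadcast identifier ranges: each range
# must lie within 1..32767 and must not overlap existing ones.
ranges = [(20, 20), (100, 199), (200, 204), (210, 214),
          (1200, 1215), (2100, 2120)]   # abbreviated list

def range_is_free(lo, hi, existing):
    """True if lo..hi is legal and disjoint from every existing range."""
    if not (1 <= lo <= hi <= 32767):
        return False
    return all(hi < a or lo > b for a, b in existing)

assert range_is_free(2200, 2210, ranges)        # unused range: OK
assert not range_is_free(150, 160, ranges)      # collides with 100-199
assert not range_is_free(40000, 40001, ranges)  # above 32767
```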

                        Disk files used by GAMESS
          unit  name     contents
          ----  ----     --------
           4   IRCDATA   archive results punched by IRC runs, and
                         restart data for numerical HESSIAN runs.
           5   INPUT     Namelist input file. This MUST be a disk
                         file, as GAMESS rewinds this file often.
           6   OUTPUT    Print output (FT06F001 on IBM mainframes)
                         If not defined, UNIX systems will use the
                         standard output for this printout.
           7   PUNCH     Punch output. A copy of the $DATA deck,
                         orbitals for every geometry calculated,
                         hessian matrix, normal modes from FORCE,
                         properties output, IRC restart data, etc.
           8   AOINTS    Two e- integrals in AO basis
           9   MOINTS    Two e- integrals in MO basis
          10   DICTNRY   Master dictionary, for contents see below.
          11   DRTFILE   Distinct row table file for -CI- or -MCSCF-
          12   CIVECTR   Eigenvector file for -CI- or -MCSCF-
          13   NTNFMLA   Newton-Raphson formula tape for -MCSCF-
          14   CIINTS    Sorted integrals for -CI- or -MCSCF-
          15   WORK15    GUGA loops for diagonal elements;
                         ordered second order density matrix;
                         scratch storage during Davidson diag
          16   WORK16    GUGA loops for off diagonal elements;
                         unordered second order density matrix;
                         2nd order density in AO basis
          17   CSFSAVE   CSF data for transition moments, SOC
          18   FOCKDER   derivative Fock matrices analytic hess
          20   DASORT    Sort file for -MCSCF- or -CI-;
                         also used by HF's DIIS method
          23   JKFILE    J and K "Fock" matrices for -GVB-

          24   ORDINT    sorted AO integrals
          25   EFPIND    effective fragment data

                  Contents of the direct access file 'DICTNRY'
               1. Atomic coordinates
               2. various energy quantities in /ENRGYS/
               3. Gradient vector
               4. Hessian (force constant) matrix
          int  5. ISO - symmetry operations for shells
          int  6. ISOC - symmetry operations for centers (atoms)
               7. PTR - symmetry transformation for p orbitals
               8. DTR - symmetry transformation for d orbitals
               9. not used, reserved for FTR
              10. not used, reserved for GTR
              11. Bare nucleus Hamiltonian integrals
              12. Overlap integrals
              13. Kinetic energy integrals
              14. Alpha Fock matrix (current)
              15. Alpha orbitals
              16. Alpha density matrix
              17. Alpha energies or occupation numbers
              18. Beta Fock matrix (current)
              19. Beta orbitals
              20. Beta density matrix
              21. Beta energies or occupation numbers
              22. Error function extrapolation table
              23. Old alpha Fock matrix
              24. Older alpha Fock matrix
              25. Oldest alpha Fock matrix
              26. Old beta Fock matrix
              27. Older beta Fock matrix
              28. Oldest beta Fock matrix
              29. Vib 0 gradient for FORCE runs
              30. Vib 0 alpha orbitals in FORCE
              31. Vib 0 beta  orbitals in FORCE
              32. Vib 0 alpha density matrix in FORCE
              33. Vib 0 beta  density matrix in FORCE
              34. dipole derivative tensor in FORCE.
              35. frozen core Fock operator
              36. Lagrangian multipliers
              37. floating point part of common block /OPTGRD/
          int 38. integer part of common block /OPTGRD/
              39. ZMAT of input internal coords
          int 40. IZMAT of input internal coords
              41. B matrix of redundant internal coords
              42. not used.
              43. Force constant matrix in internal coordinates.
              44. SALC transformation
              45. symmetry adapted Q matrix
              46. S matrix for symmetry coordinates
              47. ZMAT for symmetry internal coords
          int 48. IZMAT for symmetry internal coords
              49. B matrix
              50. B inverse matrix

              51. overlap matrix in Lowdin basis,
                  temp Fock matrix storage for ROHF
              52. genuine MOPAC overlap matrix
              53. MOPAC repulsion integrals
              54. Coulomb integrals
           55-60. not used
              61. temp MO storage for GVB and ROHF-MP2
              62. temp density for GVB
              63. dS/dx matrix for hessians
              64. dS/dy matrix for hessians
              65. dS/dz matrix for hessians
              66. derivative hamiltonian for OS-TCSCF hessians
              67. partially formed EG and EH for hessians
              68. MCSCF first order density in MO basis
              69. alpha Lowdin populations
              70. beta Lowdin populations
              71. alpha orbitals during localization
              72. beta orbitals during localization
           73-83. not used
              84. d/dx dipole velocity integrals
              85. d/dy dipole velocity integrals
              86. d/dz dipole velocity integrals
           87-88. reserved for effective fragment use
              89. not used
              90. ECP coefficients
          int 91. ECP labels
              92. ECP coefficients
          int 93. ECP labels
              94. bare nucleus Hamiltonian during FFIELD runs
              95. x dipole integrals
              96. y dipole integrals
              97. z dipole integrals
              98. former coords for Schlegel geometry search
              99. former gradients for Schlegel geometry search
              In order to pass data correctly between different
          machine types when running in parallel, a DAF record
          must contain only floating point values, or only
          integer values.  No logical or Hollerith data may
          be stored.  The final calling argument to DAWRIT and
          DAREAD must be 0 or 1 to indicate floating point or
          integer values are involved.  The records containing
          integers are so marked in the above list.
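              The homogeneity rule amounts to a simple typing
          discipline, which the following toy model expresses; the
          flag plays the same role as the final argument of DAWRIT
          and DAREAD (0 for floating point, 1 for integer):

```python
# Toy model of the DAF record rule: a record is either all
# floating point (flag 0) or all integer (flag 1), never mixed.
def check_record(values, flag):
    """Mimic the typing discipline of DAWRIT's final argument:
    flag 0 -> every value must be a float,
    flag 1 -> every value must be an int."""
    kind = float if flag == 0 else int
    return all(type(v) is kind for v in values)

assert check_record([1.0, 2.5, -3.0], 0)   # pure floating point: OK
assert check_record([5, 6, 7], 1)          # pure integer: OK
assert not check_record([1.0, 2], 0)       # mixed record: illegal
```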
              Physical record 1 (containing the DAF directory) is
          written whenever a new record is added to the file.  This
          is invisible to the programmer.  The numbers shown above
          are "logical record numbers", and are the only thing that
          the programmer need be concerned with.