Three Ways to Back Up MySQL

  • Dump (backup) MySQL databases to XML
mysqldump -u $USER --password=$PASSWORD --all-databases --xml > mysql.sql.xml
  • MediaWiki MySQL XML dump
php dumpBackup.php --full > group_wiki.sql.xml
  • Dump MySQL databases using crontab
nice -n 19 mysqldump -u $USER -p$PASSWORD $DATABASE --default-character-set=$CHARSET -c | nice -n 19 gzip -9 > mysql-$DATABASE-$(date '+%Y%m%d').sql.gz
where CHARSET is binary or utf8, and DATABASE is --all-databases, group_wiki, or group_web.
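The dated filenames produced by the crontab line pair naturally with a retention policy. A minimal sketch (the `prune_dumps` helper name, backup directory, and 30-day retention are our own illustrative choices, not from the original notes):

```shell
#!/bin/sh
# Delete gzipped MySQL dumps older than a given number of days.
# Usage: prune_dumps DIR DAYS
prune_dumps() {
  find "$1" -name 'mysql-*.sql.gz' -type f -mtime "+$2" -delete
}

# e.g. from the same crontab, after the dump job:
#   prune_dumps /var/backups/mysql 30
```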


List User

select * from mysql.user;
select User from mysql.user;

Delete User

delete from mysql.user WHERE User='name';

Show character set

Run the “status” command in the mysql client and look for the “Server characterset” line.



Upgrade installed software using Packages

  1. ports-mgmt/portmaster: has no dependency but needs an up-to-date ports tree
    • portmaster -P: use packages, but build port if not available;
    • portmaster -PP: fail if no package is available;
    • portmaster -r: build the specified port and all ports that depend on it;
    • portmaster --clean-distfiles: delete outdated distfiles not referenced by any installed port.
  2. ports-mgmt/portupgrade: needs the ports tree
    • portupgrade -aP: will upgrade all your packages and build those missing in the latest version from the ports tree;
    • portupgrade -aPP: will upgrade only when packages are available;
    • portupgrade -r: act on all those packages depending on the given packages as well;
    • portupgrade -R: act on all those packages required by the given packages as well;
    • portupgrade -aD: delete failed distfiles for all packages.
  3. pkg_upgrade in sysutils/bsdadminscripts: does not need the ports tree
  4. Sample workflow:
    • Security patches:
    freebsd-update fetch
    freebsd-update install

    freebsd-update does not change the patch level shown by “uname -a” (such as from 9.0-RELEASE-p3 to 9.0-RELEASE-p5), unless the kernel is also updated. The file “/var/db/freebsd-update/tag” will always contain the actual patch level information.

    • Major and minor version update:
    freebsd-update -r 8.2-RELEASE upgrade
    freebsd-update install
    #reboot into the new system, then run the install step again to finish updating userland
    freebsd-update install
    portmaster -af #all third-party software needs to be rebuilt and re-installed, as it may depend on libraries which have been removed during the upgrade process
    freebsd-update install #tie up all the loose ends in the upgrade process
    • Update Ports Collection:
    portsnap fetch update
    • Upgrade Ports:

    First read /usr/ports/UPDATING for additional steps users may need to perform when updating a port. Then use either portmaster or portupgrade to perform the actual upgrade.

    portupgrade -aP


    portupgrade -PrR [package_name]

System Settings

Disabling the hardware bell/beep

Type the following command to disable for current session:

sysctl hw.syscons.bell=0

To make the setting persist across reboots, enter:

echo "hw.syscons.bell=0" >> /etc/sysctl.conf
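The plain “>>” append adds a duplicate line every time it is run. A guarded variant (sketch; the `add_once` helper name is ours, not a system tool) appends only when an identical line is absent:

```shell
# Append a line to a file only if that exact line is not already present,
# so re-running the setup script does not accumulate duplicates.
add_once() {
  grep -qxF "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}

# add_once "hw.syscons.bell=0" /etc/sysctl.conf
```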


ifconfig_bge0="inet  netmask"
ifconfig_xl0="inet  netmask"


restrict default ignore
restrict -6 ::1
restrict mask nomodify notrap
fudge stratum 10

Rocks cluster

Management of computer clusters

Intelligent Platform Management Interface (IPMI) is usually available for server machines. It can use the dedicated IPMI Ethernet port or share the first LAN port (so make sure the first port is connected to the internal network switch) for remote monitoring and control.

KVM switch can be used for non-server workstations or older machines.

Please refer to the User Manuals page for details on how to use IPMI or KVM. SuperMicro has a suite of Server Management Utilities to perform health monitoring, power management and firmware maintenance (BIOS and IPMI/BMC firmware upgrade). Rocks also bundles the OpenIPMI console interface.


Follow the Users Guide in the Support and Docs section of Rocks cluster’s web site.

  • Reserve a certain amount of disk space for compute nodes that will not be overwritten when reinstalling happens. 20G seems enough for the operating system and software. Remember: the gateway should be!
  • Update the kernel to the latest version, and update again when newer versions come out.
yum --enablerepo base upgrade kernel
yum --enablerepo base upgrade kernel-devel
yum --enablerepo base upgrade kernel-headers
cp /var/cache/yum/base/packages/kernel*.rpm /export/rocks/install/contrib/6.1.1/x86_64/RPMS/
cd /export/rocks/install; rocks create distro

Check that you indeed have the desired version, then kickstart the nodes.

uname -r
while read cn; do rocks run host $cn '/boot/kickstart/cluster-kickstart'; sleep 3; done < <(rocks list host compute|cut -d ':' -f 1)
  • Create user accounts (see Add a user) before installing anything else so that there is less chance that the desired UID/GIDs conflict with software-generated accounts, and set disk quota (see Implement disk quota) to prevent any user inadvertently generating a huge amount of data from affecting the entire system.
  • Install ZFS on Linux (see Use the ZFS file system)
  • Install the most recent Torque roll
rocks add roll /path/to/torque/roll.iso
rocks enable roll torque
cd /export/rocks/install; rocks create distro
rocks run roll torque | sh
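The kickstart loops used above feed on the first colon-separated field of “rocks list host compute”. The extraction can be sanity-checked against fabricated output (the sample lines below are made up; the real command prints more columns):

```shell
# Fabricated sample of `rocks list host compute` output (illustrative only)
sample='compute-0-0: Rocks Compute Node
compute-0-1: Rocks Compute Node'

# Same extraction as the kickstart loop: keep the hostname before the colon
hosts=$(printf '%s\n' "$sample" | cut -d ':' -f 1)
echo "$hosts"
```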

Configuring Environment Modules package

It is recommended that modulefiles be stored in a directory shared among all nodes. For example, create the directory under /share/apps, and add it to /usr/share/Modules/init/.modulespath:

mkdir /share/apps/modulefiles
echo "/share/apps/modulefiles" >> /usr/share/Modules/init/.modulespath

Finally, make sure the .modulespath file is broadcast to all nodes (see how to keep files up to date on all nodes using the 411 Secure Information Service).

Using the ZFS file system

Due to the licensing of the software, ZFS on Linux is supplied in source code only even if you have already selected the zfs-linux roll when installing Rocks cluster. Please refer to zfs-linux Roll: Users Guide for how to build the binaries.

  • Create a zpool for each additional hard drive that is not used as the system disk, and create a ZFS file system for each active user with compression, NFS sharing, and quota turned on. Compression with ZFS carries very little overhead and, because of the reduced file size, sometimes even improves IO.
zpool create space raidz2 /dev/sda /dev/sdb ... raidz2 /dev/sdp /dev/sdq ... raidz2 sdx sdy ... spare sdz ...
zfs set atime=off space
zfs set compression=gzip space

for u in active_user1 active_user2 ...; do
  zfs create space/$u
  zfs set compression=lz4 space/$u
  zfs set sharenfs=on space/$u
  zfs set quota=100G space/$u
  chown -R $u:$u /space/$u
done

To make these file systems available as /share/$USER/spaceX, add the following line to the end of /etc/auto.share

* -fstype=autofs,-Dusername=& file:/etc/auto.zfsfs

And create /etc/auto.zfsfs with the following contents, and propagate it using 411.

* -nfsvers=3 cluster.local:/&/${username}

You need to enable the share points on every boot by adding to /etc/rc.d/rc.local the following line:

zfs share -a

For how to enable them automatically, see ZFS Administration, Part XV- iSCSI, NFS and Samba.

NOTE: Sometimes “zfs share -a” does not populate “/var/lib/nfs/etab” and make /share/$USER/space available on other nodes. A work-around is simply to execute “zfs set sharenfs=on space/SOME_USER” on any user before calling “zfs share -a”.

Automatic backup

ZFS uses copy-on-write and, as a result, snapshots can be created very quickly and cheaply. Create the following script as /etc/cron.daily/zfs-snapshot to keep the last 7 daily, 5 weekly, 12 monthly, and 7 yearly backups.


snapshot() {
  local root=$1
  local prefix=$2
  local keep=$3

  zfs list -t filesystem -o name -H -r "$root" | while read fs; do
    [ "$fs" == "$root" ] && continue

    # echo "zfs snapshot $fs@$prefix-$(date '+%Y%m%d')"
    zfs snapshot "$fs@$prefix-$(date '+%Y%m%d')"

    zfs list -t snapshot -o name -s creation -H -r "$fs" | grep "@$prefix-" | head -n "-$keep" | while read ss; do
      # echo "zfs destroy $ss"
      zfs destroy "$ss"
    done
  done
}

snapshot "space" "daily" 7
[ $(date +%w) -eq 0 ] && snapshot "space" "weekly" 5
[ $(date +%-d) -eq 1 ] && snapshot "space" "monthly" 12
[ $(date +%-j) -eq 1 ] && snapshot "space" "yearly" 7
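The retention in the script relies on GNU head’s negative line count: “head -n -$keep” prints all but the last $keep lines, so with snapshots sorted oldest-first by creation, everything except the newest $keep gets destroyed. A standalone check of that behavior (snapshot names are fabricated):

```shell
# Ten fake snapshot names, oldest first; with keep=7, only the three
# oldest should be selected for destruction.
keep=7
snaps=$(for i in 01 02 03 04 05 06 07 08 09 10; do echo "space/u1@daily-202001$i"; done)
to_destroy=$(printf '%s\n' "$snaps" | head -n "-$keep")
echo "$to_destroy"   # the three oldest snapshots
```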

Periodic error checking

Hard drives can have silent data corruption. ZFS can detect and correct these errors on a live system. Create the following script as /etc/cron.monthly/zfs-scrub (or in /etc/cron.weekly if using cheap commodity disks):


zpool scrub space


Add new queues to /etc/slurm/partitions:

PartitionName=E5_2650v4 DEFAULT=YES STATE=UP TRESBillingWeights="CPU=1.0,Mem=0.25G,GRES/gpu=2.0" DefaultTime=60 DefMemPerCPU=512 nodes=compute-0-[0-139]
PartitionName=4170HE DEFAULT=NO STATE=UP TRESBillingWeights="CPU=1.0,Mem=0.25G,GRES/gpu=2.0" DefaultTime=60 DefMemPerCPU=512 nodes=compute-2-[0-31]

And make the following changes in /etc/slurm/slurm.conf:



Finally, update compute node attributes, sync the configuration to all nodes, and set a maximum walltime:

rocks report slurm_hwinfo | sh
rocks sync slurm
sacctmgr modify cluster where cluster=cluster set maxwall=96:00:00

Slurm by default forbids logging in to compute nodes unless the user has jobs running on that node. If this behavior is not desired, disable it by:

rocks set host attr attr=slurm_pam_enable value=false
rocks sync slurm


You can use reservations to drain the cluster for maintenance.

scontrol create reservation starttime=2018-07-06T09:00:00 duration=600 user=root flags=maint,ignore_jobs nodes=ALL

Configuring Torque compute node settings

Edit /var/spool/torque/server_priv/nodes to include node specifications, such as:

compute-0-0 np=8  ntE5-2609 ps2400 E5-26xx
compute-1-0 np=8  ntE5430   ps2660 E54xx
compute-2-0 np=8  ntE5420   ps2500 E54xx
compute-3-0 np=8  ntE5410   ps2330 E54xx
compute-4-0 np=8  ntE5405   ps2000 E54xx

after which restart pbs_server by executing “service pbs_server restart”. In this example, the prefixes “nt” and “ps” (configured in maui.cfg) are used to denote node type and processor speed information.

Making your frontend run queued jobs for PBS (Torque/Maui)

If you have installed the Torque roll, issue the following commands as root on the frontend.

The first line, setting $frontend, simply ensures that the name matches the one returned by /bin/hostname (generally the FQDN). They must match, or pbs_mom will refuse to start.

The next two lines set the number of cores to be used for running jobs. You probably should reserve a few cores for all the Rocks overhead processes, and for interactive logins, compiling, etc. In this example, we save 4 cores for the overhead and assign the rest for jobs. This is accomplished by setting the “np = $N” (np means number of processors) value.

export frontend=`/bin/hostname`
export N=`grep -c ^processor /proc/cpuinfo`
export N=`expr $N - 4` # reserve 4 cores
qmgr -c "create node $frontend"
qmgr -c "set node $frontend np = $N"
qmgr -c "set node $frontend ntype=cluster"
service pbs_server restart

Alternatively, you can edit /opt/torque/server_priv/nodes by hand, and do “service pbs_server restart” to make it re-read the file. Next, make sure pbs_mom is started on the frontend:

scp compute-0-0:/etc/pbs.conf /etc
chkconfig --add pbs_mom
service pbs_mom start

If you have no compute nodes, you can create /etc/pbs.conf by hand. It should look like this:


You should now be able to see the frontend listed in the output of “pbsnodes -a”, and any jobs submitted to the queue will run there.

Creating additional queues in Torque

Run the following commands as root to create two queues, E5-26xx and E54xx, which include only nodes with the corresponding features, as can be defined in /var/spool/torque/server_priv/nodes (see Configure Torque compute node settings).

qmgr -c "create queue E5-26xx queue_type=execution,started=true,enabled=true,resources_max.walltime=360:00:00,resources_default.walltime=24:00:00,resources_default.neednodes=E5-26xx"
qmgr -c "create queue E54xx queue_type=execution,started=true,enabled=true,resources_max.walltime=360:00:00,resources_default.walltime=24:00:00,resources_default.neednodes=E54xx"

NOTE: Separate queues are not necessary for directing jobs to particular machines. A similar effect can be achieved by specifying node features in the submission script, for example:

#PBS -l nodes=1:E5-26xx:ppn=1

Configuring Maui scheduler behavior

Change the settings in /opt/maui/maui.cfg to the following, adding any parameters that are not already present. Restart maui to apply the changes: “service maui restart”.

# Job Prioritization:

XFACTORWEIGHT         86400
XFMINWCLIMIT          00:15:00
FSWEIGHT              86400

# Fairshare:

FSDEPTH               7
FSINTERVAL            1:00:00:00
FSDECAY               0.80

# Backfill:


# Node Allocation:


# Creds:


# Node Set:

NODESETDELAY          0:00:00
NODESETLIST           E5-26xx E54xx

# Node Attributes:


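The fairshare settings above make Maui compute effective usage as a geometrically decayed sum over FSDEPTH one-day windows with decay FSDECAY. A sketch of that arithmetic (the per-window usage values of 100 are invented for illustration):

```shell
# Effective fairshare usage = sum over windows of usage[i] * decay^i,
# where window 0 is the current FSINTERVAL. With FSDEPTH=7, FSDECAY=0.80,
# and a constant 100 units of usage per window:
total=$(echo '100 100 100 100 100 100 100' | awk '{
  decay = 0.80; total = 0
  for (i = 1; i <= NF; i++) total += $i * decay^(i-1)
  printf "%.1f", total
}')
echo "$total"   # 395.1 = 100 * (1 - 0.8^7) / (1 - 0.8)
```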

Basic settings

To implement wall time limit (specify “+WallTime = SECONDS” in the job submission file), default file system behavior, and ignore console activity, create /opt/condor/etc/config.d/98Rocks.conf with the following contents and propagate it using 411:

DefaultWallTime = 12 * $(HOUR)
EXECUTE = /state/partition1/condor_jobs
MaxWallTime = 96 * $(HOUR)
SLOT_TYPE_1 = 100%
START = ifThenElse(isUndefined(WallTime), $(DefaultWallTime), WallTime) <= $(MaxWallTime)
SYSTEM_PERIODIC_REMOVE = RemoteUserCpu + RemoteSysCpu > CpusProvisioned * ifThenElse(isUndefined(WallTime), $(DefaultWallTime), WallTime) || \
                         RemoteWallClockTime > ifThenElse(isUndefined(WallTime), $(DefaultWallTime), WallTime)

Then create the job directory on all compute nodes:

rocks run host command='mkdir -p /state/partition1/condor_jobs; chmod 755 /state/partition1/condor_jobs'

MPI jobs

Enable MPI:

rocks set attr Condor_EnableMPI true
rocks sync host condor frontend compute

Put the following files in the $MPIHOME/bin directory:


## This is a script to run openmpi jobs under the Condor parallel universe.
## Collects the host and job information into $_CONDOR_PARALLEL_HOSTS_FILE
## and executes
##   $MPIRUN --prefix $MPI_HOME --hostfile $_CONDOR_PARALLEL_HOSTS_FILE $@
## command
## The default value of _CONDOR_PARALLEL_HOSTS_FILE is 'parallel_hosts'
## The script assumes:
##  On the head node (_CONDOR_PROCNO == 0) :
##    * $MPIRUN points to the mpirun command
##    * is in $PATH.
##  On all nodes:
##    * openmpi is installed into the $MPI_HOME directory


_CONDOR_LIBEXEC=`condor_config_val libexec`

# Creates parallel_hosts file containing contact info for hosts
# Returns on head node only
if [ $ret -ne 0 ]; then
    echo "Error: $ret creating $_CONDOR_PARALLEL_HOSTS_FILE"
    exit $ret
fi

# Starting mpirun cmd
#exec $MPIRUN --prefix $MPI_HOME --mca orte_rsh_agent $_CONDOR_SSH_TO_JOB_WRAPPER --hostfile $_CONDOR_PARALLEL_HOSTS_FILE $@
exec $MPIRUN --prefix $MPI_HOME --hostfile $_CONDOR_PARALLEL_HOSTS_FILE -map-by core -bind-to core --tmpdir $_CONDOR_TEMP_DIR $@


## This script collects host and job information about the running parallel job,
## and creates a hostfile including contact info for remote hosts

## Helper fn for getting specific machine attributes from $_CONDOR_MACHINE_AD
function CONDOR_GET_MACHINE_ATTR() {  # function name assumed; the original header line was lost
    local attr="$1"
    awk '/^'"$attr"'[[:space:]]+=[[:space:]]+/ \
        { ret=sub(/^'"$attr"'[[:space:]]+=[[:space:]]+/,""); print; } \
        END { exit 1-ret; }' $_CONDOR_MACHINE_AD
    return $?
}

## Helper fn for getting specific job attributes from $_CONDOR_JOB_AD
function CONDOR_GET_JOB_ATTR() {
    local attr="$1"
    awk '/^'"$attr"'[[:space:]]+=[[:space:]]+/ \
        { ret=sub(/^'"$attr"'[[:space:]]+=[[:space:]]+/,""); print; } \
        END { exit 1-ret; }' $_CONDOR_JOB_AD
    return $?
}

## Helper fn for printing the host info
function CONDOR_PRINT_HOSTS() {
    local clusterid=$1
    local procid=$2
    local reqcpu=$3
    local rhosts=$4
    # tr ',"' '\n' <<< $rhosts | /bin/grep -v $hostname | \
    tr ',"' '\n' <<< $rhosts | \
    awk '{ sub(/slot.*@/,""); if ($1 != "") { slots[$1]+='$reqcpu'; subproc[$1]=id++; } } \
        END { for (i in slots) print i" slots="slots[i]" max_slots="slots[i]; }'
        #END { for (i in slots) print i"-CONDOR-"'$clusterid'".1."subproc[i]" slots="slots[i]" max_slots="slots[i]; }'
}

# Defaults for error testing
: ${_CONDOR_JOB_AD:="None"}

## If hostfile omitted 'parallel_hosts' is used.
## Return:
##   The function returns with error status on main process (_CONDOR_PROCNO==0).
##   The function never returns on on the other nodes (sleeping).
## The created file structure:
##   HostName1'-CONDOR-'CLusterID.ProcId.SubProcId 'slots='Allocated_CPUs 'max_slots='Allocated_CPUs
##   HostName2'-CONDOR-'CLusterID.ProcId.SubProcId 'slots='Allocated_CPUs 'max_slots='Allocated_CPUs
##   HostName3'-CONDOR-'CLusterID.ProcId.SubProcId 'slots='Allocated_CPUs 'max_slots='Allocated_CPUs
##   ...
    # getting parameters if _CONDOR_PARALLEL_HOSTS_FILE not set
    # setting defaults
    : ${_CONDOR_PARALLEL_HOSTS_FILE:=parallel_hosts}
    #local hostname=`hostname -f`
    if [ $_CONDOR_PROCNO -eq 0 ]; then
    # collecting info on the main proc
        #clusterid=`CONDOR_GET_JOB_ATTR ClusterId`
        #local ret=$?
        #if [ $ret -ne 0 ]; then
        #    echo Error: get_job_attr ClusterId
        #    return 1
        #local line=""
        #condor_q -l $clusterid | \
        cat $_CONDOR_JOB_AD | \
        awk '/^ProcId.=/ { ProcId=$3 } \
             /^ClusterId.=/ { ClusterId=$3 } \
             /^RequestCpus.=/ { RequestCpus=$3 } \
             /^RemoteHosts.=/ { RemoteHosts=$3 } \
             END { if (ClusterId != 0) print ClusterId" "ProcId" "RequestCpus" "RemoteHosts  }' | \
        while read line; do
            CONDOR_PRINT_HOSTS $line
        done | sort -d > ${_CONDOR_PARALLEL_HOSTS_FILE}
    else
    # endless loop on the workers
        while true ; do
            sleep 30
        done
    fi
#    return 0
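The attribute-extraction awk used by the helpers above can be exercised standalone. The ClassAd fragment below is fabricated (real job ads carry many more attributes), and `get_attr` is our illustrative stand-in for the CONDOR_GET_JOB_ATTR helper:

```shell
# Fabricated job ClassAd to exercise the same awk extraction as
# CONDOR_GET_JOB_ATTR: match "Attr = value" lines, strip the prefix,
# print the value, and exit non-zero if the attribute is absent.
ad=$(mktemp)
printf '%s\n' 'ClusterId = 42' 'RequestCpus = 8' > "$ad"

get_attr() {
    local attr="$1" file="$2"
    awk '/^'"$attr"'[[:space:]]+=[[:space:]]+/ \
        { ret=sub(/^'"$attr"'[[:space:]]+=[[:space:]]+/,""); print; } \
        END { exit 1-ret; }' "$file"
}

get_attr RequestCpus "$ad"   # prints 8
```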

To request a parallel job, add the following to the job submission script:

machine_count = NODES
request_cpus = CORES_PER_NODE
universe = parallel

And use the wrapper script above instead of mpirun for parallel execution.



Run “qconf -mconf” and make the following changes:

min_uid                      500
min_gid                      500
execd_params                 ENABLE_ADDGRP_KILL=true
auto_user_fshare             1000
auto_user_delete_time        0


Run “qconf -msconf” and make the following changes:

job_load_adjustments              NONE
load_adjustment_decay_time        0
weight_tickets_share              10000
weight_ticket                     10000.0


Run “qconf -mq all.q” and make the following changes:

load_thresholds       NONE
h_rt                  96:00:00

Create a file (say “tmp_share_tree”):


And use it to create a share tree fair share policy:

qconf -Astree tmp_share_tree && rm tmp_share_tree


Kill zombie jobs

SGE sometimes fails to kill all processes of a job. Use the following script to clean up these zombie processes (as well as rogue sessions by users who directly ssh to compute nodes):


launcher_pid=($(gawk 'NR==FNR{shepherd_pid[$0];next} ($1 in shepherd_pid){print $2}' <(pgrep sge_shepherd) <(ps -eo ppid,pid --no-headers)))
# Assume regular users have UIDs >=600
rogue_pid=($(gawk 'NR==FNR{launcher_pid[$0];next} ($1>=600)&&(!($2 in launcher_pid)){print $3}' <(printf "%s\n" "${launcher_pid[@]}") <(ps -eo uid,sid,pid --no-headers)))

# Do not allow any rogue processes if there are >1 jobs running on the
# same node; if a single job has the entire node, then allow the job
# owner to run unmanaged processes, while making sure that zombie
# processes from this user are still killed; if no jobs are running,
# then allow unmanaged processes (e.g., testing)
[ ${#launcher_pid[@]} -eq 0 ] && exit 0
uid=($(ps -p "$(echo ${launcher_pid[@]})" -o uid= | sort | uniq))
if [ ${#uid[@]} -gt 1 ]; then
  # echo ${rogue_pid[@]}
  kill -9 ${rogue_pid[@]}
elif [ ${#uid[@]} -eq 1 ]; then
  stime=$(gawk '{print $22}' /proc/${launcher_pid[0]}/stat)
  for (( i=0; i<${#rogue_pid[@]}; i++ )); do
    rogue_uid=$(ps -p ${rogue_pid[i]} -o uid=)
    if [ -n "$rogue_uid" ] && { [ $rogue_uid -ne $uid ] || [ $(gawk '{print $22}' /proc/${rogue_pid[i]}/stat) -lt $stime ]; }; then
      # echo ${rogue_pid[i]}
      kill -9 ${rogue_pid[i]}
    fi
  done
fi

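The two-file gawk idiom used above (NR==FNR loads the first input into an array as keys; the second pass filters against it) can be checked with fabricated pid tables (all numbers below are invented):

```shell
# First input: known shepherd pids. Second input: "ppid pid" pairs.
# Prints the pid of every process whose parent is a shepherd, mirroring
# the launcher_pid line in the cleanup script.
launchers=$(awk 'NR==FNR{shepherd_pid[$0];next} ($1 in shepherd_pid){print $2}' \
  <(printf '%s\n' 100 200) \
  <(printf '100 1001\n300 3001\n200 2001\n'))
echo "$launchers"   # 1001 and 2001; 3001 has no shepherd parent
```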
It can be enforced as a system cron job by adding the following to extend-compute.xml, between “” and “”:

<file name="/etc/cron.d/kill-zombie-jobs" perms="0600">
*/15 * * * * root /opt/gridengine/util/

Remember to escape ampersands, quotes, and less-than characters if you use extend-compute.xml to create this script.

Disabling hyper-threading

Based on some crude benchmarks, Intel Hyper-Threading appears to be detrimental to CPU-intensive workloads. It can be turned off in the BIOS via IPMI, but if there are too many nodes, or IPMI does not allow scripting the change, an alternative is to extend the compute nodes. First figure out the CPU layout using the lstopo program from hwloc, then add the following between “” and “” in extend-compute.xml (assuming cores 24 to 47 are virtual cores):

<file name="/etc/rc.d/rocksconfig.d/post-89-disable-hyperthreading" perms="0755">
for i in {24..47}; do echo 0 > /sys/devices/system/cpu/cpu$i/online; done

Installing Software

After installing a new software package, add an entry, either a single file or a directory named some_software, in the directory /share/apps/modules/modulefiles. If multiple files (representing different software versions) exist in that directory, create a file named .version to specify the default version.
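A sketch of what such an entry might look like (the package name, version numbers, and paths are invented for illustration; modulefiles use Tcl syntax, and the “#%Module” header and ModulesVersion variable are the documented .version format):

```
/share/apps/modules/modulefiles/some_software/
    1.0         (modulefile for version 1.0)
    2.0         (modulefile for version 2.0)
    .version    (selects the default)

contents of .version:
#%Module1.0
set ModulesVersion "2.0"
```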

Using Rocks Rolls

Refer to the Roll Developer’s Guide in the Support and Docs section of Rocks cluster’s web site for how to create your own Rolls.

rocks set host attr localhost roll_install_on_the_fly true shadow=yes # for installing Service Pack Rolls
rocks add roll /path/to/rollname.iso
rocks enable roll rollname
cd /export/rocks/install; rocks create distro
rocks run roll rollname | sh

After the frontend comes back up, do the following to populate the node list:

rocks sync config

then kickstart all your nodes

while read cn; do rocks run host $cn '/boot/kickstart/cluster-kickstart'; sleep 3; done < <(rocks list host compute|cut -d ':' -f 1)

If installing Service Pack Rolls, it is critical that you run cluster-kickstart-pxe, as it forces the compute nodes to PXE boot. PXE booting matters for the first install because the nodes then get their initrd from the frontend, and inside that initrd is a new tracker-client compatible with the new tracker-server. After the first install, a new initrd will be on the hard disk of the installed nodes, and it is then safe to run /boot/kickstart/cluster-kickstart.

while read cn; do rocks run host $cn '/boot/kickstart/cluster-kickstart-pxe'; sleep 3; done < <(rocks list host compute|cut -d ':' -f 1)

Using YUM repositories

Several YUM repositories are configured but disabled by default. Add “--enablerepo=REPO_NAME” to yum commands to temporarily enable REPO_NAME.

yum repolist all #Display all configured software repositories
yum clean all #clean cache
yum [--enablerepo=REPO_NAME] check-update #update package information
yum list openmotif* #list packages
yum install openmotif openmotif-devel #requirement for Grace and NEdit

Adding a software package distributed as RPMs

Create a roll first:

cd /export/rocks/install/contrib/5.4/x86_64/RPMS
wget http://url/to/some_software.rpm
cd /export/rocks/install/site-profiles/5.4/nodes
cp skeleton.xml extend-compute.xml

Edit extend-compute.xml and remove unused “” lines:

cd /export/rocks/install; rocks create distro

Now reinstall the compute nodes:

while read cn; do rocks run host $cn '/boot/kickstart/cluster-kickstart-pxe'; sleep 3; done < <(rocks list host compute|cut -d ':' -f 1)

Adding a software application distributed as source code

Install it into the /share/apps/some_software directory. A typical process is shown below:

wget http://url/to/some_software.tar.bz2
tar xjf some_software.tar.bz2
cd some_software
./configure --prefix=/share/apps/some_software
make -j 8
sudo make install clean

Uninstalling Software

Removing Rolls

rocks disable roll rollname
rocks remove roll rollname
cd /export/rocks/install; rocks create distro
rocks sync config
while read cn; do rocks run host $cn '/boot/kickstart/cluster-kickstart'; sleep 3; done < <(rocks list host compute|cut -d ':' -f 1)


  • Create an update roll:
rocks create mirror rollname=CentOS_6_X_update_$(date '+%Y%m%d')
rocks create mirror  rollname=Centos_6_X

X should be the current minor release number (e.g., X is 10 if the latest stable version of CentOS is 6.10).

Add the created update roll to the installed distribution:

rocks add roll CentOS_6_X_update_$(date '+%Y%m%d')-*.iso
rocks add roll Centos_6_X-*.iso
rocks enable roll Centos_6_X
rocks enable roll CentOS_6_X_update_$(date '+%Y%m%d')
cd /export/rocks/install; rocks create distro
  • Newly installed nodes will automatically get the updated packages. It is wise to test the update on a compute node to verify that the updates did not break anything. To force a node to reinstall, run the command:
rocks run host compute-0-0 /boot/kickstart/cluster-kickstart

If something goes wrong, you can always revert the updates by removing the update roll.

rocks remove roll CentOS_6_X_update_$(date '+%Y%m%d')
rocks remove roll Centos_6_X
cd /export/rocks/install; rocks create distro
  • After you have tested the update on some nodes with the previous step, you can update the frontend using the standard yum command:
yum update

Updating zfs-linux

Use the opportunity of the kernel update to rebuild and reinstall zfs-linux by following the steps on Users Guide: Updating the zfs-linux Roll:

cd ~/tools
git clone
make binary-roll

rocks remove roll zfs-linux
rocks add roll zfs-linux*.iso
rocks enable roll zfs-linux
cd /export/rocks/install; rocks create distro

zfs umount -a
service zfs stop
rmmod zfs zcommon znvpair zavl zunicode spl zlib_deflate

rocks run roll zfs-linux | sh

Additional notes for Rocks 6

Apache httpd updates on Rocks 6 break the 411 service, which runs over unencrypted HTTP. Fix it with the following:

echo 'HttpProtocolOptions Unsafe' >> /etc/httpd/conf/httpd.conf
service httpd restart


Create a Restore Roll that will contain site-specific info and can be used to upgrade or reconfigure the existing cluster quickly.

cd /export/site-roll/rocks/src/roll/restore
make roll


Adding a user

  • /usr/sbin/useradd -u UID USERNAME creates the home directory in /export/home/USERNAME (based on the settings in /etc/default/useradd) with UID as the user ID. If the desired user ID or the group ID has already been used, change them using:
# or
  • rocks sync users adjusts all home directories that are listed as /export/home as follows:
  1. edit /etc/password, replacing /export/home/ with /home/
  2. add a line to /etc/auto.home pointing to the existing directory in /export/home
  3. 411 is updated, to propagate the changes in /etc/passwd and /etc/auto.home

In the default Rocks configuration, /home/ is an automount directory. By default, directories in an automount directory are not present until an attempt is made to access them, at which point they are (usually NFS) mounted. This means you CANNOT create a directory in /home/ manually! The contents of /home/ are under autofs control. To “see” the directory, it’s not enough to do a ls /home as that only accesses the /home directory itself, not its contents. To see the contents, you must ls /home/username.

Implementing disk quota

  • Edit /etc/fstab, find the partitions you want to have quota on (“LABEL=” or “UUID=”), and change “defaults” to “grpquota,usrquota,defaults” on those lines.
  • Reboot, check quota state and turn on quota:
quotacheck -guvma
quotaon -guva
  • Set up a prototype user quota:
edquota -u PROTOTYPE_USER # -t DAYS to edit the soft time limits
  • Duplicate the quotas of the prototypical user to other users:
edquota -p PROTOTYPE_USER -u user1 user2 ...
  • To get a quota summary for a file system:
repquota /export

Exporting a new directory from the frontend to all the compute nodes

  • Add the directory you want to export to the file /etc/exports.

For example, if you want to export the directory /export/scratch1, add the following to /etc/exports:


This exports the directory only to nodes that are on the internal network (in the above example, the internal network is configured to be

  • Restart NFS:
/etc/rc.d/init.d/nfs restart
  • Add an entry to /etc/auto.home (or /etc/auto.share).

For example, say you want /export/scratch1 on the frontend machine (named frontend-0) to be mounted as /home/scratch1 (or /share/scratch1) on each compute node. Add the following entry to /etc/auto.home (or /etc/auto.share):

scratch1 frontend-0:/export/scratch1


scratch1 frontend-0:/export/&
  • Inform 411 of the change:
make -C /var/411

Now when you log in to any compute node and change your directory to /home/scratch1, it will be automounted.

Keeping files up to date on all nodes using the 411 Secure Information Service

Add the files to /var/411/, and execute the following:

make -C /var/411
rocks run host command="411get --all" #force all nodes to retrieve the latest files from the frontend immediately

Removing old log files to prevent /var filling up

Place the following in /etc/cron.daily:


rm -f /var/log/*-20??????
rm -f /var/log/slurm/*.log-*
rm -f /var/lib/ganglia/archives/ganglia-rrds.20??-??-??.tar

Cleaning up temporary directories on compute nodes

Add a system cron job between “” and “” in extend-compute.xml:

<file name="/etc/cron.weekly/clean-scratch" perms="0700">
find /tmp /state/partition1 -mindepth 1 -mtime +7 -type f ! -wholename /state/partition1/condor_jobs -exec rm -f {} \;
find /tmp /state/partition1 -mindepth 1 -depth -mtime +7 -type d ! -wholename /state/partition1/condor_jobs -exec rmdir --ignore-fail-on-non-empty {} \;

This will be picked up by /etc/anacrontab or /etc/cron.d/0hourly.

Managing firewall

The following rules allow access to the web server from UMN IPs:

rocks remove firewall host=cluster rulename=A40-HTTPS-PUBLIC-LAN
rocks add firewall host=cluster rulename=A40-HTTPS-PUBLIC-LAN service=https protocol=tcp chain=INPUT action=ACCEPT network=public flags='-m state --state NEW --source,,,,'
rocks remove firewall host=cluster rulename=A40-WWW-PUBLIC-LAN
rocks add firewall host=cluster rulename=A40-WWW-PUBLIC-LAN service=www protocol=tcp chain=INPUT action=ACCEPT network=public flags='-m state --state NEW --source,,,,'
rocks sync host firewall cluster

These add a few national labs to the allowed IPs for SSH:

rocks remove firewall global rulename=A20-SSH-PUBLIC
rocks add firewall global rulename=A20-SSH-PUBLIC service=ssh protocol=tcp chain=INPUT action=ACCEPT network=public flags='-m state --state NEW --source,,,,,,,,,,,'
rocks sync host firewall

Alternatively, install DenyHosts, which reads the log file for SSH authentication failures and adds the offending IPs to /etc/hosts.deny.

yum --enablerepo=epel install denyhosts
chkconfig denyhosts on
service denyhosts start
vim /etc/denyhosts.conf # configuration file

Changing the public IP address on the frontend

It is strongly recommended that the Fully-Qualified Host Name (e.g., be chosen carefully and never be modified after the initial setup, because doing so will break several cluster services (e.g., NFS, AutoFS, and Apache). If you want to change the public IP address, you can do so by:

rocks set host interface ip frontend iface=eth1
rocks set attr Kickstart_PublicAddress
# Edit the IP address in /etc/hosts, /etc/sysconfig/network-scripts/ifcfg-eth1, /etc/yum.repos.d/rocks-local.repo
# It's important to enter the following commands in one line if you are doing this remotely, as the network interface will be stopped by the first command.
ifdown eth1; ifup eth1
rocks sync config
rocks sync host network



The MediaWiki web site has an installation guide specifically for FreeBSD under “System-specific instructions” ( Read and follow this article. The following contains a summary of the steps involved.

  1. Install Apache, PHP, MySQL, and MediaWiki Using Packages:
    pkg_add -r apache22 mysql-server php5 php5-mysql mediawiki
    pkg_add -r php5-ctype #ParserFunctions extension for template logic
    pkg_add -r tidy #correct garbage HTML tags
    pkg_add -r inkscape #allow SVG upload and to enable thumbnail and preview rendering with Inkscape
    pkg_add -r php5-xmlreader #extract meta data from SVG files using the XMLReader PHP extension
    pkg_add -r ocaml ImageMagick ghostscript8 teTeX #TeX support

    Compile texvc:

    cd /usr/local/www/mediawiki/math/
    make

    Reinstall php5 to include mod-php:

    cd /usr/ports/lang/php5
    make config
    make deinstall
    make reinstall

    During the make config step above, enable the “APACHE” option in the menu.

  2. Further configuration of PHP Some parameters of the wiki, such as maximum upload size and maximum PHP execution time must be set in PHP itself. Edit (or create) the file /usr/local/etc/php.ini:
    upload_max_filesize = 500M
    post_max_size = 500M
    session.save_path = "/var/lib/phpsess"
    max_execution_time = 600

    The temporary directory for PHP sessions, /var/lib/phpsess in this sample setup, should be owned by user www and have permissions 733. When you make changes to php.ini, you have to restart Apache for the changes to take effect:

    apachectl restart
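The mode requirement above can be checked on a throwaway directory first; this sketch uses mktemp so nothing system-wide is touched (the real directory is /var/lib/phpsess, which must additionally be chown'ed to user www):

```shell
# Demonstrate mode 733 on a scratch directory standing in for /var/lib/phpsess
d=$(mktemp -d)/phpsess
mkdir "$d"
chmod 733 "$d"     # owner: rwx; group/other: write and traverse, but no listing
ls -ld "$d"        # first column reads drwx-wx-wx
```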
  3. Set up MySQL
    • Check if the database server is running:
    mysqladmin status

    If it is not, run mysqld_safe to start it:

    mysqld_safe &

    Another way to start the MySQL server for the first time is to run the configure script available at the root of the installation. It creates the initial tables and automatically starts the mysql daemon.

    • Set a password for the “root” account on your database server:
    mysqladmin -u root password new_root_password
    history -c
    • Run the MySQL command-line client:
    mysql -u root -p

    This will prompt for the “root” database password you just set, then connect to the MySQL server. Now, continue with the SQL commands below:

    create database group_wiki;
    grant index, create, select, insert, update, delete, alter, lock tables on group_wiki.* to 'wikiuser'@'localhost' identified by 'new_wikiuser_password';
    flush privileges;

    Start mysql-server:

    /usr/local/etc/rc.d/mysql-server onestart
  4. Set up Apache Add the following sections to /usr/local/etc/apache22/httpd.conf:
    LoadModule php5_module libexec/apache/
    <IfModule php5_module>
      DirectoryIndex index.php index.html
      AddType application/x-httpd-php .php
      AddType application/x-httpd-php-source .phps
    </IfModule>

    If you want to serve MediaWiki content exclusively, you can simply change the Apache document root to the MediaWiki installation directory:

    DocumentRoot "/usr/local/www/mediawiki"

    Also add an entry for the MediaWiki directory to httpd.conf:

    <Directory "/usr/local/www/mediawiki">
      Options Indexes FollowSymLinks
      DirectoryIndex index.php index.html
      AllowOverride None
      Order allow,deny
      Allow from all
    </Directory>

    Restart Apache:

    apachectl restart
  5. Edit /etc/rc.conf Make sure your hostname in rc.conf contains a domain name, and that Apache and MySQL are set to start automatically during boot:
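In rc.conf this typically amounts to lines like the following — a sketch, where the hostname is a placeholder and the knob names match the apache22 and mysql-server ports installed above:

```shell
hostname="wiki.example.org"
apache22_enable="YES"
mysql_enable="YES"
```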
  6. Run the MediaWiki web installer Create the images directory for storing uploads if it does not exist yet:
    mkdir mediawiki/images

    Make sure all MediaWiki files belong to user www:

    chown -R www mediawiki

    Open http://localhost/mediawiki/mw-config/index.php in your browser to start installation. Enter root and your MySQL root password in the superuser name and password fields. After the installer has finished successfully, copy the resulting LocalSettings.php file to the MediaWiki root directory.

  7. Edit /usr/local/www/mediawiki/LocalSettings.php Change the relevant variables:
    $wgSitename, $wgMetaNamespace = "Siepmann Group Wiki";
    $wgLogo = "$wgScriptPath/where_the_logo_is.gif";
    $wgEmergencyContact, $wgPasswordSender = "";
    $wgDBname = "group_wiki";
    $wgDBuser = "wikiadmin";
    $wgDBpassword = "password";
    $wgDBadminuser = "wikiadmin";
    $wgDBadminpassword = "password";
    $wgEnableUploads = true;
    # User-added options
    $wgUseTidy = true;
    $wgFileExtensions = array_merge($wgFileExtensions, array('svg','tar','tar.gz','tgz','tar.bz2','tbz','tar.Z','tex','pdf','pptx','ppt','docx','doc','xlsx','xls','f','F','for','FOR','fpp','FPP','f90','F90','f95','F95','c','cpp','cxx','cc'));
    $wgSVGConverters = array('Inkscape' => '/usr/local/bin/inkscape -z -w $width -f $input -e $output',);
    $wgSVGConverter = 'Inkscape';
    $wgUseTeX = true;
    $wgJobRunRate = 0;
    $wgExternalLinkTarget = '_blank';
    $wgMimeTypeBlacklist = array(
       # HTML may contain cookie-stealing JavaScript and web bugs
       'text/html', 'text/javascript', 'text/x-javascript',  'application/x-shellscript',
       # PHP scripts may execute arbitrary code on the server
       'application/x-php', 'text/x-php',
       # Other types that may be interpreted by some servers
       'text/x-python', 'text/x-perl', 'text/x-bash', 'text/x-sh', 'text/x-csh',
       # Client-side hazards on Internet Explorer
       'text/scriptlet', 'application/x-msdownload',
       # Windows metafile, client-side vulnerability on some systems
       # A ZIP file may be a valid Java archive containing an applet which exploits the
       # same-origin policy to steal cookies
       # MS Office OpenXML and other Open Package Conventions files are zip files
       # and thus blacklisted just as other zip files. If you remove these entries
       # from the blacklist in your local configuration, a malicious file upload
       # will be able to compromise the wiki's user accounts, and the user
       # accounts of any other website in the same cookie domain.
    );
  8. Install extensions
    • wikEd (or CKEditor): add the complete version of the installation code to the MediaWiki:Common.js page.
    // install [[Wikipedia:User:Cacycle/wikEd]] in-browser text editor
    + '&action=raw&ctype=text/javascript');
    • GeSHi: download and extract it to the extensions directory.
    • GeSHiCodeTag: create a new file called GeshiCodeTag.php in the extensions directory containing the source code, replace the line:
    $languagesPath = "extensions/geshi/geshi";

    with:

    $languagesPath = "$IP/extensions/geshi/geshi";

    in that file, and finally add the following line to LocalSettings.php.

    ## GeshiCodeTag extension
    • SyntaxHighlight GeSHi: download and extract it to the extensions directory, replace the line:
    require( 'geshi/geshi.php' );

    with:

    require( '../geshi/geshi.php' );

    in SyntaxHighlight_GeSHi.class.php, and finally add the line:


    to LocalSettings.php.


Software files will be updated when updating FreeBSD (such as by portupgrade), but the database should be checked afterwards:

cd /usr/local/www/mediawiki/maintenance
php update.php

Certain extensions may need to be updated manually, and LocalSettings.php may also need to be adjusted as documented in the “configuration changes” section of the release notes. It may be necessary to place the database in read-only mode first, which prevents user edits:

$wgReadOnly = 'System upgrading';


Remember to back up both the database and the file system.



Assuming MediaWiki has already been installed, this description only deals with the Drupal specific parts.

  1. Install Drupal7
    pkg_add -r drupal7 php5-gd
  2. Create the configuration file and grant permissions
    cp sites/default/default.settings.php sites/default/settings.php

    Give the web server write privileges (666 or u=rw,g=rw,o=rw) to the configuration file:

    chmod a+w sites/default/settings.php

    Give the web server write privileges to the sites/default directory:

    chmod a+w sites/default
  3. PHP settings
    Edit /usr/local/etc/php.ini:
    apc.rfc1867 = 1

    Restart Apache for the changes to take effect:

    apachectl restart
  4. Create the Drupal database
    mysqladmin -u root -p create group_web
    mysql -u root -p

    At the MySQL prompt, enter:
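The SQL statement itself is not shown on this page; the following is a sketch of the grant suggested by Drupal 7's INSTALL.mysql.txt, with 'drupaluser' and its password as placeholders:

```sql
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
  CREATE TEMPORARY TABLES ON group_web.*
  TO 'drupaluser'@'localhost' IDENTIFIED BY 'new_drupaluser_password';
FLUSH PRIVILEGES;
```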

  5. Set up Apache
    Add an alias for the Drupal directory to httpd.conf:
    Alias /drupal7 /usr/local/www/drupal7

    Also add an entry for the Drupal directory to httpd.conf:

    <Directory "/usr/local/www/drupal7">
       Options Indexes FollowSymLinks
       DirectoryIndex index.php index.html
       AllowOverride None
       Order allow,deny
       Allow from all
    </Directory>

    Restart Apache:

    apachectl restart
  6. Run the installation script Point your browser to the Drupal site: http://localhost/drupal7, and follow the wizard to set up the database, add the first user account, and provide basic website settings.
  7. Install extensions
    • Download the following modules from the Drupal Module project: wysiwyg, biblio, media (which requires ctools, views, file_entity), filedepot (which requires libraries).
    • Install by extracting them to sites/all/modules and enabling them in Administration > Modules (see Installing modules and themes for more information).
    • Go to Administration > Modules, review the Permissions settings for the newly installed modules and modify them as necessary.
    • Go to Administration > Modules > Wysiwyg > Configure, and follow the instructions to install CKEditor.
    • Follow the instructions in the README file for filedepot to set it up.


Log in as an administrator user, go to Administration > Configuration > Development > Maintenance mode. Enable the “Put site into maintenance mode” checkbox and save the configuration. Update software files by updating FreeBSD (such as using portupgrade), and run update.php through the browser to update the database.


Remember to back up both the database and the file system. You can also create a backup tarball at administer >> content >> backup.

Installing Linux on Microsoft Windows using VirtualBox

We like Mac machines because their operating system is essentially a BSD variant that works seamlessly with the Unix world, yet is popular enough that companies like Microsoft and Adobe develop versions of their products specifically for the platform. Nice as this is, when it comes to buying computers ourselves we may not always want a Mac for various reasons, and as a result we resort to work-arounds like the one this article describes, to get Unix software working under Microsoft Windows without the need for dual-booting. Windows is picked as the host OS here because most vendor-provided hardware drivers are only available for Windows, and therefore graphics performance, battery life, etc. tend to be better. For more information, see: Ubuntu gets inside Windows; on using Linux as the host OS and running Windows both within Linux and natively: Windows 7: In both VM and native (using Linux as host OS), HOWTO: Windows XP in both VM and native, and Taming Windows 7 in a VirtualBox VM Using Raw Disk Access; and more on VDIs: Tutorial: All about VDIs and HOWTO: manage VDIs and import native installations.

  1. To allow multiple operating systems to share processor resources efficiently and safely, many processors now support hardware virtualization, which you will need to enable in the BIOS. You can usually enter the BIOS configuration by pressing the ESC, DEL, F2, or F12 key during system boot. For Intel chips, the hardware virtualization option is often called “Intel VT-x”. (Under Windows 8, you may instead need to reach the firmware setup through the advanced startup options.)
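Once a Linux system is running (in the VM or natively), a quick Linux-only check — a sketch, not from the original article — shows whether hardware virtualization is exposed to it: "vmx" is the CPU flag for Intel VT-x and "svm" for AMD-V.

```shell
# Look for hardware-virtualization CPU flags in /proc/cpuinfo (Linux only)
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
  echo "hardware virtualization exposed"
else
  echo "not exposed (check the BIOS setting)"
fi
```

Note that inside a VM the flags will usually be absent even when the host BIOS setting is enabled, unless nested virtualization is on.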
  2. Download a copy (ISO image) of the operating system you want to install, such as Ubuntu. The 64 bit version is used here for demonstration purposes.
  3. Download and install VirtualBox and, optionally, the Extension Pack.
  4. After installation, run VirtualBox as administrator, click button “New” to create a new virtual machine.
  5. You can choose whatever Name for the virtual machine. For OS Type, choose Linux as Operating System, and Ubuntu (64 bit) as Version.
  6. Depending on the total amount of memory in your machine, select an appropriate amount for the virtual machine to use.
  7. If this is the first time you create a new virtual machine, you will need to create a virtual hard drive to be used in the guest OS.
    • In the Virtual Disk Creation wizard that pops up, choose VDI as virtual hard disk file type. The other file types are from other virtual machine software programs; for details, see Chapter 5. Virtual storage#Disk image files (VDI, VMDK, VHD, HDD).
    • You can allow the virtual disk file to grow dynamically as it fills up, or allocate a fixed amount of space at its creation. The guest OS will always see a hard disk of the size you specify on the next page, no matter which option you choose; it is only a matter of efficiency.
    • Choose the location to save the virtual disk file and select its size. For a typical Linux installation with additional software we are going to use, the default size should be sufficient. However, selecting a larger size such as 20 or 40 GB could be safer.

    ADVANCED: if you want to be able to both run Ubuntu in VirtualBox under Windows and boot into native Ubuntu or if you already have a dual-boot setup and want to import the Ubuntu installation to VirtualBox, you will need to use raw disks/partitions as virtual disks. Read more about Using a raw host hard disk from a guest

    • Check which partitions you will be using:
    C:\Program Files\Oracle\VirtualBox> VBoxManage.exe internalcommands listpartitions -rawdisk \\.\PhysicalDrive0
    • Create vmdk for the whole disk
    C:\Program Files\Oracle\VirtualBox> VBoxManage.exe internalcommands createrawvmdk -filename C:\somewhere\ubuntu.vmdk -rawdisk \\.\PhysicalDrive0 -register

    or create vmdk for select partitions

    C:\Program Files\Oracle\VirtualBox> VBoxManage.exe internalcommands createrawvmdk -filename C:\somewhere\ubuntu.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3,5,6 -register
    • Make a bootable GRUB CD to be used during the booting phase of the guest OS:

    mkdir -p iso/boot/grub
    cp /boot/grub/grub.cfg iso/boot/grub
    grub-mkrescue -o grub.iso iso

    Copy “grub.iso” to your host OS (either directly to a shared folder or via a USB flash drive, etc.) and mount it as a CD during booting.

  8. Click “Settings”, select “Storage” from the left panel, you can choose the Ubuntu iso image to be mounted as CD/DVD (often the entry belonging to Controller: IDE) for the guest OS. Save the settings.
  9. Click “Start” to boot the guest OS. From here on, treat what happens inside VirtualBox as happening on another real machine, and install the guest OS as you would for a real machine. This means you might need to press the ESC, DEL, F2, or F12 key to make it boot from CD.
  10. After installing the guest OS, boot into it, and then select Devices -> Install Guest Additions from the menu of the VirtualBox window containing the running guest system. This will install software that will offer the guest OS a few nice features, see Chapter 4. Guest Additions.
  11. After installing the Guest Additions, you can select Devices -> Shared Folders to add folders from your host OS (Windows) that you want the guest OS (Ubuntu) to be able to access. You can make them permanent and auto-mount. To see if auto-mount works, use “mount” or “df -h” and check their output. HOWTO: Use Shared Folders explains a little bit more on how to use shared folders in Linux guest OS.

Setting up Linux

Customizing shell

  1. .bashrc is a startup file that the Bash shell reads whenever it starts an interactive session. This is where you can put configurations (environment variables, aliases, path amendments, etc.) to customize your Terminal experience. An example of a .bashrc file is included here. As you become more comfortable with Linux/Unix, you may find you modify this file frequently.
  2. The file should be located in your home directory (~/), and you need to create it if it does not yet exist. Open and save the file with the editor of your choice. You will not need the sudo command since this file is now in your home directory and you have write permission. If you use vi, just type vi ~/.bashrc at the prompt.
  3. Use the copy and paste commands to put the following lines into your .bashrc file.
    # Amend environment variables
    export PATH="$PATH:."
    # Set prompt
    PS1='\[\e]2;\w\a\]\u@\h:\[\e[1m\]\w\[\e[0m\]\$ '
    # User Specific Aliases
    umask 077
    export CLICOLOR=1
    alias edm='emacs --daemon'
    alias ecl='emacsclient --create-frame'
    alias enw='emacsclient -nw'
    alias grep='grep --color'
    alias less='less -Ri'
    alias la='ls -ahlF'
    alias lh='ls -hlF'
    alias rbak='\rsync -rltDzP --delete --delete-excluded --force'
    alias rsync='rsync -azP'
    alias ssh='ssh -Y'
    os_name=$(uname -s)   # the OS checks below need this set
    if [ "$os_name" = 'Darwin' ]; then
      alias top='top -ocpu -Otime -F -R'
    elif [ "$os_name" = 'FreeBSD' ]; then
      alias vi='vim -c "syntax on"'
      alias qm='qstat -u $USER'
      alias qI='qsub -I -X -V -l nodes=1:ppn=8,walltime=1:00:00 -q devel'
      alias sm='showq -u $USER'
      alias sq='squeue -o "%.18i %.9P %.30j %.8u %.2t %.10M %.6D %R"'
      alias sqm='sq -u $USER'
    fi
    alias itasca='ssh'

    The first section is where modifications to the environment variables go. An example of this is addition to the PATH used to search for executable commands. In this example, the . adds the working directory to the existing path ($PATH), so any executables you create in whatever directory you are working in can be referenced directly.

    The next section is the specification for the style of the prompt. This prompt is set to display username@hostname:/working/directory$ in bold font. It also sends the same information into the title bar of the terminal window (either Terminal or X-terminal). This is default behavior on Unix machines, but needs to be specifically configured on Mac machines. Writing information about the hostname and working directory into the title bar helps keep track of multiple windows logged in to multiple machines. For more information on setting prompts in the bash shell, search online for the bash prompt HOWTO.

    The last section contains user-specified command shortcuts (aliases). The long command, shown in quotes, is linked to whatever series of keystrokes (preceding the equals sign) you specify in this file. This example includes two list commands, lh and la. The hlF flags list files in human-readable form with information about their size and when they were last modified; the a option also lists hidden files (all files that start with .). An aliased SSH connection is created to the MSI machine Itasca. When you paste these lines into your own file, be sure to change ‘username’ to your specific username. You can add your own aliases as they become useful to you over the course of your research.

  4. Save the file and exit (:wq or :x for vi).
  5. Tell the computer to begin reading your new ~/.bashrc settings
    source ~/.bashrc
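Two of the lines above can be sanity-checked without touching your real environment. This sketch (not part of the original file) runs in a subshell, so nothing persists:

```shell
# Run in a subshell: confirm an alias definition and the effect of umask 077
(
  alias la='ls -ahlF'
  type la                 # reports what "la" is aliased to
  umask 077
  t=$(mktemp -d)
  touch "$t/f"
  ls -l "$t/f"            # permissions read -rw-------: private to the owner
  rm -r "$t"
)
```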

Setting up Mac OS X

Installing Xcode Developer Tools and its Command Line Tools

  1. Xcode can be found from your Mac OS X installation media, the Mac App Store, or Apple Developer Connection site.
  2. For OS X 10.7 or later, the Command Line Tools need to be installed separately. Download and install them either from the Apple Developer Connection site, or from Xcode: choose Preferences -> Downloads and click “Install” next to “Command Line Tools”.

Installing X-Window system

X11 is a software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for Linux/Unix applications. Normally, it will run in the background and, with the exception of an icon on your dock, will be unnoticeable.

  1. Since OS X 10.8, X11 is not shipped with the OS and needs to be installed from XQuartz or using Homebrew (see below). NOTE: You should NOT set the DISPLAY variable in any system configuration files or X11 forwarding will be disabled.
  2. The default behavior for X11 applications is to open a xterm terminal at start-up. With the superior features of the Terminal applications, you will be unlikely to have need of this xterm terminal. To disable it, open a Terminal window and enter the following:
    defaults write org.x.X11 app_to_run /usr/bin/true
  3. In the Preferences menu, under the Input tab, make sure “Enable key equivalents under X11” is unchecked. This is going to make any meta-key functions in Emacs work properly. However, the drawback is that none of the Apple functions (such as apple-S for save, or apple-Q to close) will work in your X-window programs. It is a trade-off — either Emacs works “normally” or the apple functions work “normally” in X-windows. My preference is to have Emacs working since this is a commonly used program, while the apple functions are really just keyboard shortcuts that can all be done a different way.

Package management systems

There exist a few package management systems that port free/open-source Unix/Linux software to Mac OS X, including Homebrew, MacPorts, Fink, Rudix, and Stow; Homebrew, MacPorts, and Fink are the more popular choices. Use Homebrew if you haven’t yet settled on one. Fink builds on Debian’s packaging tools and apt-get, and has pre-compiled executables. MacPorts is strongly influenced by BSD’s ‘ports’ system and builds everything from source code. Homebrew makes maximum use of what comes with OS X and does not need root/sudo to install packages. In addition, Homebrew Cask extends support to other OS X applications (such as Chrome) and lets you skip the step-by-step installation process. MacPorts and Homebrew are more up to date than Fink, but Fink tends to have more scientific applications.

List of useful packages

Some of the packages (git, numpy/scipy/matplotlib) below come with the Command Line Tools of Xcode, but they are usually not the latest versions.

  • CMake - a cross-platform build system; easier to use than Autotools or writing Makefiles, and will be used as the default build system for the group code
  • GIMP - an open-source program similar to Adobe Photoshop
  • Git - a distributed version control system used to manage code development in our group; you must learn to use it, see Git_Guidelines
  • gnuplot - another plotting program
  • Inkscape - an open-source SVG vector graphics editor
  • MacTeX or TeX Live - the TeX typesetting program, includes latex
  • Sublime Text - A text editor for code, markup and prose

Some people may find the following also useful:

  • Emacs or XEmacs - a customizable text editor
  • Hunspell - a spell-checker program.
  • ImageMagick - a software suite to create, edit, compose, or convert bitmap images
  • NumPy/SciPy/matplotlib/pandas - Python extensions that define numerical array and matrix type and associated basic operations and support advanced math, signal processing, optimization, statistics, plotting and much more
  • Octave - a Matlab-like environment for numerical computation and graphics
  • R - a statistical computing and graphics environment
  • Xmgrace - a program for making scientific graphs

Using Homebrew

  1. Read more on Homebrew’s home page, but basically installation is a one-liner
    ruby -e "$(curl -fsSL"
  2. To install new packages, simply type “brew install packagename(s)” at the command prompt. To search for a package, use “brew search packagename”.
  3. If you haven’t, you can install X11 through Homebrew Cask with:
    brew install caskroom/cask/brew-cask
    brew cask install mactex xquartz
    brew cask install firefox google-chrome skype sublime-text vlc # choose as you see fit
  4. To install everything at once, use the following commands. Keep in mind this will take a long time to complete.
    brew tap homebrew/x11
    brew tap homebrew/science
    brew install cmake gimp git gnuplot inkscape

Using MacPorts

  1. MacPorts has a “pkg” installer and the instructions for installation are straightforward.
  2. To install new packages, simply type “sudo port install packagename(s)” at the command prompt. To search for a package, use “port search packagename”. The full list of available packages is found on the MacPorts website on the Available Ports page.
  3. To install everything at once, use the following command. Keep in mind this will take a long time to complete and will accept the default option for all questions during the installation.
sudo port install cmake emacs gimp git gnuplot grace hunspell imagemagick inkscape octave py-matplotlib py-numpy py-scipy R texlive

Using Fink

  1. Fink used to also have a binary installer, but as of 03/25/2008 it is not available for Leopard or later OS X versions. You will need to follow the instructions on the Download page on Fink website.
  2. To install new packages, simply type “fink install packagename(s)” at the command prompt. To search for a package, use “fink apropos packagename”. The full list of available packages is found on the Fink Project website on the Packages page.
  3. To install everything at once, use the following command. Keep in mind this will take a long time to complete and will accept the default option for all questions during the installation (remove the “-y” switch to answer each of them yourself).
fink -y install cmake emacs24 gimp2 git gnuplot grace hunspell imagemagick inkscape octave matplotlib-py27 scipy-py27 numpy-py27 r-base texlive
Password: (enter your password)

Visualization software

Install Schrodinger, Jmol, and VMD.

Set up ssh-agent

  • Generate a key pair following the first bullet point under SSH-Agent, and skip the rest.
  • Follow “On MacOS” in GPG-Agent.
  • Add the lines under “Useful common settings”.

Setting up .ssh/config

Follow the steps in Customizing .ssh/config and set up Sharing sessions over a single connection and Host alias. Leave Multi-hop for later when you need it, but do keep this option in mind.

Customizing shell

Follow this part in Setting up Linux.

Customizing Terminal

  1. Open a Terminal window: find the Terminal icon again and double-click to open it. You can add this icon to your dock if you want (I recommend this since you will probably use Terminal more than any other application) by clicking and dragging the icon onto the dock.
  2. Once you have a Terminal window open, go to Terminal -> Preferences -> Settings. On the left is a series of included styles for the Terminal window. These are similar to PowerPoint templates. If you like one of them, simply highlight it and click on Default at the bottom of the list. This will make that style your default Terminal style.
  3. If you are interested in fuller customization, there are several things you can change. Explore the tabs in the Settings menu to see what can be modified. You can change font style and size, text color, bold text color, highlight color, transparency, background color, etc.
  4. To make your terminal function more like a Unix terminal with familiar key functions, I recommend changing the following settings:
    • Shell: where it says: “When the shell exits”, select “Close only if the shell exited cleanly”.
    • Keyboard: Change the following key bindings
    • End: highlight and click on “edit.” Modifier: none. Action: “send string to shell.” Key sequence: type option-ctrl-[ together, then release and type [ followed by F. If you did this successfully, you will see \033[F on the line.
    • Home: highlight and click on “edit.” Modifier: none. Action: “send string to shell.” Key sequence: option-ctrl-[ then [ and H
    • Page down: highlight and click on “edit.” Modifier: none. Action: “send string to shell.” Key sequence: option-ctrl-[ then [ 6 ~
    • Page up: highlight and click on “edit.” Again, the modifier is “none” and the action is “send string to shell.” The key sequence is option-ctrl-[ then [ 5 ~.
    • Shift end: highlight and click on “edit.” Modifier: shift. Action: “scroll to end of buffer.”
    • Shift home: highlight and click on “edit.” Modifier: shift. Action: “scroll to start of buffer.”
    • Shift page down: highlight and click on “edit.” Modifier: shift. Action: “scroll to next page in buffer.”
    • Shift page up: highlight and click on “edit.” Modifier: shift. Action: “scroll to previous page in buffer.”
  5. The various color schemes also serve a purpose. You can create a library of different backgrounds corresponding to different work locations. Most of the work done in our lab involves remote connections to several computers, often multiple connections at the same time. I find the different backgrounds and colors, along with adding fun and interest to my screen, helps me keep straight which computer I am connected to at any given time. You can modify any of the existing color schemes or create your own.
    • In the Shell tab, check the box next to ‘Run command:’ and add the name of an aliased SSH command in the text box. Any time this particular Terminal profile is opened, it will automatically connect to the computer specified in the SSH command. The result will be a differently colored window open with a remote connection to a specific machine.
    • To open these customized windows, select New Window from the Terminal menu bar. A drop down list of the profiles you created will appear. Selecting any of them will open the new window with appropriate connection.

Setting up Shared Mac

This document walks through the basic set up procedures for configuring the Mac Pro in a computational research environment. If this is a new machine, start with Account Initiation. If this is an existing machine and you only need to create a new account for yourself, begin with Create user account and Modify /etc/sudoers in Account Initiation and then follow Customization.

Account initiation

NOTE: This must be done by the computer administrator.

Setting a root account password

  1. Choose System Preferences in the Apple menu and click on Accounts.
  2. There will only be one account, the one created by the Electronics Shop. This account is your root account and has all the administrative privileges. You need to change the password from the default given by the Electronics Shop to something else.
  3. Check the box next to ‘Allow user to administer this computer’.

Creating user account

  1. While in the Accounts menu of System Preferences, click on the + button in the bottom left corner and fill in the form with appropriate information. (You may need to first click the lock and enter the credentials to enable changes.) Keep in mind that once the “Short Name” is set, it cannot be changed. This will become the name of your home directory and the only way to change it is to create a new user account.
  2. Choose a picture by clicking on the picture icon if you like.
  3. Make sure the account type is ‘Standard’. If the “Allow user to administer this computer” box is checked, uncheck it. This is a simple safety system that will force you to enter the root password for any major modifications and prevent you from inadvertently making serious mistakes.
  4. Select Login Options at the bottom of the list of users. Disable the automatic login, and select the checkbox “Enable fast user switching.” Fast user switching is a way to allow multiple users to log in at the same time and switch between desktops.

Modifying /etc/sudoers

  1. The file /etc/sudoers contains the usernames of users with advanced permissions. You want to add your name to it to allow yourself administrative privileges for some of the Unix-side applications.
  2. Open a Terminal window. The Terminal launch icon can be found by opening the Finder (the “face” icon in the task bar at the bottom of the desktop), clicking on the Applications menu in the side-bar, and scrolling down to the Utilities folder where Terminal is located.
  3. A new window will open with a command prompt. At the prompt, type the following commands:
sudo visudo
Password: (enter the root password)

This will open the file and change the screen to look similar to the screenshot below (MacOSX_setup_etc_sudoers.png). To modify this file, move the cursor to the bottom of the ‘User privilege specification’ section (use the arrow keys). Next type shift-A. This will take you to the end of the last line and switch to insert mode (the word insert should appear at the bottom of the screen). Then press return to begin a new line and type username ALL=(ALL) ALL. To exit, press the escape key and then type :wq!
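For reference, the ‘User privilege specification’ section would then look something like the fragment below; here username is a placeholder for the short name of your own account:

```
# User privilege specification
root      ALL=(ALL) ALL
username  ALL=(ALL) ALL
```

The first ALL is the hosts the rule applies to, (ALL) is the users the command may be run as, and the final ALL is the set of permitted commands.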

Logging in to your account

  1. With Fast User Switching enabled, you should be able to click on the user name displayed in the upper right hand corner of the desktop.
  2. In the dropdown list, find your account name and click to log in.

Performance and security settings

Modifying system preferences

  1. In the Apple menu, choose System Preferences.
  2. Configure processor performance and “sleep” functions:
    • Click on Energy Saver.
    • Slide the bar to “Never” for “Computer sleep”.
    • Move the slider to your desired setting for “Display sleep”.
    • Check ‘Wake for Ethernet network access’ and ‘Start up automatically after a power failure’.
    • Return to the main System Preferences menu by clicking on Show All in the top menu bar.
  3. Configure printers
  4. Allow remote SSH access:
    • Click on Sharing.
    • Check the box next to Remote Login.
    • Return again to the main System Preferences menu.
  5. Configure the firewall:
    • Click on Security.
    • Under the ‘General’ tab, check the box for ‘Require password after sleep or screen saver begins’, ‘Disable automatic login’, and ‘Use secure virtual memory’.
    • Under the ‘Firewall’ tab, select ‘Set access for specific services and applications.’ The programs displayed in the list should reflect the choices made in the Sharing system preference (i.e., remote login (SSH) and possibly Apple Remote Desktop).
    • Return once again to the main System Preferences menu.

Modifying /etc/profile

  1. The file /etc/profile is a configuration file that the computer reads at start up. In this file, you can specify environment variables that will be applied system-wide to all users. The changes made here include the PATH and TERM variables. The PATH environment variable tells your computer where to look for command files (executables). Those included in this file are the default directories where commands are typically found. TERM is the environment variable that tells your computer what type of terminal you want. xterm-color allows you to have terminals with colored directories. Specifying this variable here forces Terminal and X-terminal to have the same type. If the variable is left unspecified, Terminal is by default xterm-color, but X-terminal is just xterm.
  2. Still in the terminal window, type the following command:
    sudo vi /etc/profile
    Password: (enter your password)
  3. Modify the file to look similar to the one shown below. To insert text using the vi editor, type the letter i. This will switch vi into insert mode and the word insert should appear at the bottom of the terminal window. You can type normally in insert mode. The added lines include two environment variables that are read by the computer at start-up.
  4. When the file is updated, you want to apply it to the current terminal session:
    source /etc/profile
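As a sketch, the two additions described above look like the lines below. The PATH value here is only an example list of standard directories; adjust it to match your system:

```shell
# /etc/profile additions (example values)
# PATH: where the shell looks for executables, system-wide
export PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin"
# TERM: make Terminal and X-terminal both use the colored terminal type
export TERM=xterm-color
```

After sourcing the file, `echo $TERM` should report xterm-color in the current session.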


Configuring the tcp wrapper

  1. The tcp wrapper is a security feature that allows you to specify exactly who can have access to your computer via SSH. Once configured, it will block all incoming IP addresses except for the ones you specify.
  2. Once again using your favorite editor, create a new file called /etc/hosts.allow. Remember that this file is owned by root, so you will need the sudo command to edit it. (For example, type sudo vi /etc/hosts.allow)
  3. In the blank file, copy and paste the following lines:
    ALL : : allow
    ALL : : allow
    ALL : : allow
    ALL : your.home.ip.address : allow
    ALL : ALL : deny
  4. This file works in the following manner. The ALL refers to the services that you are granting access to. On a Linux machine you could be even more specific and replace ALL with SSHD, which would allow only SSH access from the machines with the designated IP addresses (or the host names associated with them). On the Mac, the SSH program was not compiled with the same tcp-wrapper configuration, and using SSHD in place of ALL does not have the desired effect. However, remember that when you turned on your firewall, you blocked every service but SSH, so on the Mac, using ALL is effectively the same as using SSHD. The next field in each line is either a computer’s hostname, the computer’s IP address, or part of an IP address. After that comes the allow/deny statement that tells the computer whether to allow access or deny it. Allowed IP addresses under this configuration are, in order of checking: addresses coming from the chemistry department at the University of Minnesota, those coming from a Minnesota Supercomputing Institute supercomputer, those from the chemical engineering and materials science department, and your home IP address. Anything that fails to match one of those specifications is denied access.
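As a sketch, with deliberately fake documentation prefixes (192.0.2. and 198.51.100.) standing in for the real department and supercomputer networks, a complete /etc/hosts.allow would read:

```
ALL : 192.0.2.               : allow
ALL : 198.51.100.            : allow
ALL : your.home.ip.address   : allow
ALL : ALL                    : deny
```

A pattern that ends with a dot matches every host whose address begins with that prefix, so one line covers a whole department network.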

Enabling X11 forwarding by SSH

  1. This is the default behavior on most Linux boxes and is needed to run graphical (X11) programs over remote logins. Enabling this forwarding feature allows anyone with an account on your machine to use X11 programs when they connect remotely.
  2. Open the file /etc/sshd_config:
    sudo vi /etc/sshd_config
    Password: (enter your password)
  3. Once the file opens, type /X11 so vi will search and find the line:
    X11Forwarding no

    With the cursor on that line, type i to move into insert mode, and comment out the line

    #X11Forwarding no

    or modify it to allow X11 forwarding:

    X11Forwarding yes
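If you prefer a non-interactive edit over vi, the same change can be made with sed. The demonstration below works on a throwaway copy; for the real change, point the sed line at /etc/sshd_config and run it with sudo:

```shell
# Demonstration on a scratch copy; use /etc/sshd_config (with sudo) for real.
printf 'X11Forwarding no\n' > /tmp/sshd_config.demo
# -i.bak edits in place and keeps a .bak backup (works with both BSD and GNU sed)
sed -i.bak 's/^X11Forwarding no/X11Forwarding yes/' /tmp/sshd_config.demo
cat /tmp/sshd_config.demo   # prints: X11Forwarding yes
```

Remember that sshd must be restarted (or Remote Login toggled off and on) before the new setting takes effect.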

Plotting Data


Creating the default settings for new graphs

Open xmgrace, make the desired changes to the various settings, and save the file as ~/.grace/templates/Default.agr. A template appropriate for publication can be downloaded here. Note that in order to change the actual page dimensions, you need to go to View -> Page setup; in the Page section, change Size to “Custom”, set the Dimensions unit to “in”, and enter 3.25 x 3.25 (the requirement for a half-page figure) or 7 x 7 (for a full-page figure). This fixes the size on the output devices (X11, PDF, PNG, or actual printers). The resolution on the computer screen (X11) is set in the Resolution (dpi) text box, and the resolutions for other types of devices are set in File -> Print setup -> Resolution (dpi). Unfortunately, the resolution setting and the default print device are not saved in the template file, but see below.

Setting the default printer to print to .png files with 300dpi

Create the file ~/.grace/gracerc.user and enter the following lines. The first two make PNG at 300 dpi the default hardcopy device, matching the heading above; the third sets the on-screen (X11) resolution:

HARDCOPY DEVICE "PNG"
DEVICE "PNG" DPI 300
DEVICE "X11" DPI 192

Special symbols

Italic, bold

italic: \1italic
bold: \2bold
italic and bold: \3xyz

Subscript, superscript

subscript: 3\s10\N
x-squared: x\S2\N

\N returns the font position to normal.

Special symbols

Angstrom symbol: \cE\C

For other characters, look at this list: ascii table with low and high characters. Just use the character from the left column between \c and \C to produce the one from the right column. I highlighted the most interesting characters (for a scientist).

Greek letters

theta: \xq\0
rho: \xr\0

\0 returns the font type to normal.
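These escapes can be combined within one label. For example, a density axis label assembled from the pieces above (Symbol-font r for rho, then a superscript):

```
\xr\0 / (g cm\S-3\N)
```

This renders as ρ / (g cm⁻³).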

The \c, \C, and \x options are listed as deprecated in the xmgrace manual. The new method to insert special characters in xmgrace is:

  • Press ctrl-e while positioned in a text-edit field to bring up the font dialog box.
  • Select the desired font from the drop-down list. You probably want to use Symbol because it contains many of the commonly used special characters.
  • Click on the character you want to insert

This will put the corresponding escape sequence for that character into the text box.


Ternary diagram

You basically need to calculate the (x,y) coordinates for all your data points, as well as the three vertices of the triangle. This template file will produce a diagram of that form.
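The coordinate calculation can be sketched with awk. The file name ternary.dat and its three-column "a b c" layout are assumptions for this example; the mapping places the pure-a vertex at (0,0), pure-b at (1,0), and pure-c at (0.5, sqrt(3)/2):

```shell
# Sample input (hypothetical file): one "a b c" composition per line.
printf '1 0 0\n0 1 0\n0 0 1\n' > ternary.dat
# x = b + c/2, y = (sqrt(3)/2)*c, after normalizing so a+b+c = 1.
awk '{ s = $1 + $2 + $3; b = $2 / s; c = $3 / s;
       printf "%.6f %.6f\n", b + c/2, 0.866025 * c }' ternary.dat > ternary.xy
cat ternary.xy
```

The resulting two-column ternary.xy can be read into xmgrace as an ordinary XY set and overlaid on the triangle template.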