Note: Before applying these changes, you should have at least a year of Avamar administration experience.
Modifying the "repl_cron" Script
1) As user root on the replication source utility node, copy the /usr/local/avamar/bin/repl_cron script to a different, preferably meaningful, name.
% cd /usr/local/avamar/bin
% cp -p repl_cron repl2_cron
2) As user admin on the replication source utility node, copy the /usr/local/avamar/etc/repl_cron.cfg file to a different name, preferably a name that is consistent with the name used for the repl_cron program.
NOTE: Do not modify the original repl_cron.cfg file in the /usr/local/avamar/bin directory. Copy the in-use repl_cron.cfg, which is in the /usr/local/avamar/etc directory.
% cd /usr/local/avamar/etc
% cp -p repl_cron.cfg repl2_cron.cfg
3) As user root, edit the /usr/local/avamar/bin/repl2_cron script.
The portion of code that is of interest in the original repl_cron looks like:
--BEGIN--
sub init {
    dpn::add_legal("timeout:d", "configfile:s");
    dpncron::init("repl_cron", "replicate", 4600, "--infomsgs --verbose");
    $configfile = "$dpn::avamardir/etc/repl_cron.cfg";
    $configfile = $dpn::flags{configfile} if $dpn::flags{configfile};
    dpncron::tprint("repl_cron [$version]: configfile = '$configfile'\n") if $dpn::flags{verbose};
}
--END--
Using the repl2_cron naming shown earlier, modify the code to look like:
--BEGIN--
sub init {
    dpn::add_legal("timeout:d", "configfile:s");
    dpncron::init("repl2_cron", "replicate2", 4600, "--infomsgs --verbose");
    $configfile = "$dpn::avamardir/etc/repl2_cron.cfg";
    $configfile = $dpn::flags{configfile} if $dpn::flags{configfile};
    dpncron::tprint("repl2_cron [$version]: configfile = '$configfile'\n") if $dpn::flags{verbose};
}
--END--
Notes about the modifications:
- The new value "repl2_cron" is the name of the program that is to be called. In this case the program being called is /usr/local/avamar/bin/repl2_cron.
- The new value "replicate2" is the base name of the replication log. In this case the replication log for repl2_cron will be /usr/local/avamar/var/cron/replicate2.log.
In the /usr/local/avamar/bin/repl2_cron file, search for all other instances of repl_cron and replace them with repl2_cron. This updates the log output and keeps the naming consistent for future modifications. One way to do this is shown below.
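For example, a global search-and-replace can be done with sed. This is only a sketch, assuming GNU sed on the utility node; back up the file first, review the result, and remove the backup once you are satisfied:
% cd /usr/local/avamar/bin
% cp -p repl2_cron repl2_cron.bak
% sed -i 's/repl_cron/repl2_cron/g' repl2_cron
% diff repl2_cron.bak repl2_cron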
1) In the original repl_cron.cfg and the newly created repl2_cron.cfg files, use the appropriate flags, such as --dstaddr, --dstid, --dstpassword, --exclude, and --include patterns, to include or exclude certain clients or groups. At this point it is just as you would expect when making any modifications to replication jobs: the server names must be specified correctly, and so on. Refer to the replication documentation for additional flags. A hypothetical sketch of both files is shown after step 2. NOTE: You cannot use the EMS or the MCS to edit the repl2_cron.cfg file.
2) If you launch concurrent replication sessions to the same target grid, you need to avoid overlapping clients between the two configuration files. The easiest way to do this is to use the "--include" flag in the repl2_cron.cfg file to limit that replication to ONLY the "special clients", and to use the "--exclude" flag in the standard repl_cron.cfg file to exclude those same "special clients".
NOTE: If both replication sessions attempt to replicate the backups for the same client, the replication session that started first will lock the hash cache file, causing the second replication session to generate an error on that client. The replicator will terminate after a set number of errors.
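For example, the two configuration files might look like the following. This is a hypothetical sketch only: the hostname, login ID, password, and client paths are placeholders, and the one-flag-per-line layout shown here is an assumption; match whatever format your existing in-use repl_cron.cfg already uses and confirm the flags against the replication documentation.
repl_cron.cfg (standard replication, excluding the "special clients"):
--BEGIN--
--dstaddr=target-grid.example.com
--dstid=repluser
--dstpassword=replpassword
--exclude=/clients/special-client1
--exclude=/clients/special-client2
--END--
repl2_cron.cfg (second replication, limited to ONLY the "special clients"):
--BEGIN--
--dstaddr=target-grid.example.com
--dstid=repluser
--dstpassword=replpassword
--include=/clients/special-client1
--include=/clients/special-client2
--END--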
1) As user root on the utility node, list the existing crontab:
% crontab -u dpn -l
The existing crontab of an Avamar server that performs replication looks similar to:
--BEGIN--
# <<< BEGIN AXION ADMINISTRATOR MANAGED ENTRIES -- DO NOT MANUALLY MODIFY >>>
0 10 * * * /usr/local/avamar/bin/cron_env_wrapper morning_cron_run
0 18 * * * /usr/local/avamar/bin/cron_env_wrapper evening_cron_run
0 0 * * * /usr/local/avamar/bin/cron_env_wrapper /usr/local/avamar/lib//mcs_ssh_add repl_cron
# <<< END AXION ADMINISTRATOR MANAGED ENTRIES >>>
--END--
2) As user root or dpn on the utility node, modify the crontab to add the additional replication job.
To modify the dpn crontab as user root, use:
% crontab -u dpn -e
After the modifications, the crontab should look like:
--BEGIN--
0 1 * * * /usr/local/avamar/bin/cron_env_wrapper /usr/local/avamar/lib/mcs_ssh_add repl2_cron
# <<< BEGIN AXION ADMINISTRATOR MANAGED ENTRIES -- DO NOT MANUALLY MODIFY >>>
0 10 * * * /usr/local/avamar/bin/cron_env_wrapper morning_cron_run
0 18 * * * /usr/local/avamar/bin/cron_env_wrapper evening_cron_run
0 0 * * * /usr/local/avamar/bin/cron_env_wrapper /usr/local/avamar/lib//mcs_ssh_add repl_cron
# <<< END AXION ADMINISTRATOR MANAGED ENTRIES >>>
--END--
NOTE: The MCS manages all of the entries within the
"Avamar Administrator Managed Entries" section. Since the MCS is only able to control _one_
replication instance, the additional replication job(s) must go ABOVE the
"# <<< BEGIN..." line.
NOTE: Due to bug 14879, when the MCS restarts, the MCS will
not only over-write any entries within the "Avamar Administrator Managed
Entries", but will also delete any entries BELOW this section. Therefore, until we resolve this bug, be sure
to add the additional crontab entry ABOVE the "Avamar Administrator
Managed Entries" block.
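After saving, list the dpn crontab again to confirm that the new entry is present above the managed block:
% crontab -u dpn -l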
Allowing the new Replication process to run:
1.) As the root user, we'll need to add the new replication job to dpncron.pm:
a. cd /usr/local/avamar/bin/
b. open dpncron.pm with your favorite editor
2.) Look for the following section:
--BEGIN--
my %exclude_list = (
    'cp_cron' => [ 'cp_cron' ],
    'gc_cron' => [ 'gc_cron' ],
    'hfscheck_cron' => [ 'hfscheck_cron', 'hfscheck_kill' ],
    'hfscheck_kill' => [ 'hfscheck_kill' ],
    'repl_cron' => [ 'repl_cron' ],
    'metadata_cron' => [ 'metadata_cron' ]
);
--END--
3.) Copy the repl_cron line and paste it onto the next line, then change repl_cron to the name of the new replication job (above we used repl2_cron):
--BEGIN--
my %exclude_list = (
    'cp_cron' => [ 'cp_cron' ],
    'gc_cron' => [ 'gc_cron' ],
    'hfscheck_cron' => [ 'hfscheck_cron', 'hfscheck_kill' ],
    'hfscheck_kill' => [ 'hfscheck_kill' ],
    'repl_cron' => [ 'repl_cron' ],
    'repl2_cron' => [ 'repl2_cron' ],
    'metadata_cron' => [ 'metadata_cron' ]
);
--END--
This allows the process called repl2_cron to run to completion.
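To verify the new job end to end without waiting for cron, one option is to run the same command that the new crontab entry invokes and watch the new log. This is only a sketch, assuming the crontab entry shown earlier; run it at a time that does not overlap the standard repl_cron session:
% su - dpn
% /usr/local/avamar/bin/cron_env_wrapper /usr/local/avamar/lib/mcs_ssh_add repl2_cron
% tail -f /usr/local/avamar/var/cron/replicate2.log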
IMPORTANT: When we have run multiple concurrent replication sessions, we have always started the two sessions at least one hour apart. We don't know if this is really required, but given that every replication session starts with a series of avmgr commands that modify the GSAN accounts on the replication target, we have always been concerned about having two different replication sessions simultaneously attempting to modify the accounting information on the replication target.