Search Avamar backups for a file or folder

Do you get requests to restore a file or folder from Avamar, only to find the user has no idea where it was actually located? And you don't have Data Protection Search?

Well, here is a fairly easy way to get the location you need from the Avamar CLI.

Log in as admin through your favorite SSH client and run

avtar --list --account=path_of_client --after="YYYY-MM-DD hh:mm:ss" --before="YYYY-MM-DD hh:mm:ss" --verbose | grep file_name

where

path_of_client is the Avamar domain location of the client, in the form /clients/servername

file_name is the file or folder name you want to find.
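
For example, to look for anything named budget.xlsx in yesterday's backups of a client registered as /clients/fileserver01 (the client path, file name, and dates here are made up; substitute your own):

avtar --list --account=/clients/fileserver01 --after="2018-03-20 00:00:00" --before="2018-03-21 00:00:00" --verbose | grep budget.xlsx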

Avamar password recovery

I recently had an issue where, on a new Avamar install, the customer mistyped the password twice. We had selected the option to make all passwords the same, so we were unable to log into the system at all. We booted the Utility node into Single User Mode by following the instructions here.

After we reset the OS account password we were able to SSH into the Avamar, but we were unable to successfully change all of the passwords. When prompted for the Avamar root account password in the change-passwords script, the password we tried didn't work; it was asking us for the incorrectly typed password that had set this whole thing in motion.

After SSHing in as admin, we were able to su to root. We then decrypted the MC database tables, which gave us a handful of Avamar account passwords. Because we had chosen to make all passwords the same during the install workflow, that included our mistyped password. Re-encrypt the DB tables, rerun the change-passwords script, and everything is OK!


admin@avamar:~/>: su
Password:
root@avamar:/home/admin/#: grep AP /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml
                <entry key="backuprestoreAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="backuponlyAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="rootAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="MCUSERAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="restoreonlyAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="viewuserAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
root@avamar:/home/admin/#: mccipher decrypt --all

**********************************************************************************
* EMC Avamar Management Console (MC).                                            *
* MC Security Tool for Secret Key generation, encryption, decryption and digest. *
**********************************************************************************

WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/asn/rmi_ssl_keystore_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/lib:com/avamar/mc/cac/ldap_login_ap
WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/MCUSERAP
WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/backuponlyAP
WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/backuprestoreAP
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/mcserver_keystore_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/mcserver_truststore_ap
WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/restoreonlyAP
WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/rootAP
WARNING: MCS Preference looks like already decrypted: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/viewuserAP
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/var/mc/server_data/prefs:com/avamar/mc/cac/ldap_login_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/var/mc/server_data/prefs:com/avamar/mc/dpn/users/mcserver_keystore_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/var/mc/server_data/prefs:com/avamar/mc/dpn/users/mcserver_truststore_ap
WARNING: Can not decrypt some or all MCS Preferences.

All MC DB Tables have been decrypted successfully.

All MCCLI config files have been decrypted successfully.

See MCCipher log for details.

© 2012 EMC Corporation. All rights reserved.
root@avamar:/home/admin/#: grep AP /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml
                <entry key="backuprestoreAP" value="backuprestore1" />
                <entry key="backuponlyAP" value="backuponly1" />
                <entry key="rootAP" value="RootPassword!" />
                <entry key="MCUSERAP" value="MCUser1" />
                <entry key="restoreonlyAP" value="restoreonly1" />
                <entry key="viewuserAP" value="viewuser1" />
root@avamar:/home/admin/#: mccipher encrypt --all

**********************************************************************************
* EMC Avamar Management Console (MC).                                            *
* MC Security Tool for Secret Key generation, encryption, decryption and digest. *
**********************************************************************************

WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/lib:com/avamar/mc/cac/ldap_login_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/mcserver_keystore_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/lib:com/avamar/mc/dpn/users/mcserver_truststore_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/var/mc/server_data/prefs:com/avamar/mc/cac/ldap_login_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/var/mc/server_data/prefs:com/avamar/mc/dpn/users/mcserver_keystore_ap
WARNING: Password is NULL or EMPTY: mcserver:/usr/local/avamar/var/mc/server_data/prefs:com/avamar/mc/dpn/users/mcserver_truststore_ap
WARNING: Can not encrypt some or all MCS Preferences.

All MC DB Tables have been encrypted successfully.

All MCCLI config files have been encrypted successfully.

See MCCipher log for details.

© 2012 EMC Corporation. All rights reserved.
root@avamar:/home/admin/#: grep AP /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml
                <entry key="backuprestoreAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="backuponlyAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="rootAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="MCUSERAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="restoreonlyAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
                <entry key="viewuserAP" value="{AES}9qRz5orPek4Dq7ybDzh/MA==" />
root@avamar:/home/admin/#:

Avamar - MCS service not starting

After powering on an Avamar, the MCS service will not start.


admin@Avamar:~/>: dpnctl status
dpnctl: INFO: gsan status: up
dpnctl: INFO: MCS status: down.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: disabled.
dpnctl: INFO: avinstaller status: up.
dpnctl: INFO: ConnectEMC status: up.
dpnctl: INFO: ddrmaint-service status: up.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

Trying to start the MCS service normally generates an error, and the service will not start.


admin@Avamar:~/>: dpnctl start mcs
dpnctl: INFO: Starting MCS...
dpnctl: INFO: To monitor progress, run in another window: tail -f /tmp/dpnctl-mcs-start-output-26146
dpnctl: WARNING: 1 warning seen in output of "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start"

admin@Avamar:~/>:

admin@Avamar:~/>: dpnctl status
dpnctl: INFO: gsan status: up
dpnctl: INFO: MCS status: down.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: disabled.
dpnctl: INFO: avinstaller status: up.
dpnctl: INFO: ConnectEMC status: up.
dpnctl: INFO: ddrmaint-service status: up.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

Force an MCS restore with:

admin@Avamar:~/>: dpnctl start --force_mcs_restore

  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
Action: starting all
Have you contacted Avamar Technical Support to ensure that this
  is the right thing to do?

Answering y(es) proceeds with starting all;
          n(o) or q(uit) exits

y(es), n(o), q(uit/exit): y
dpnctl: INFO: gsan is already running.
dpnctl: INFO: Restoring MCS data...
dpnctl: INFO: MCS data restored.
dpnctl: INFO: Starting MCS...
dpnctl: INFO: To monitor progress, run in another window: tail -f /tmp/dpnctl-mcs-start-output-26146
dpnctl: WARNING: 1 warning seen in output of "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start"
dpnctl: INFO: MCS started.
dpnctl: INFO: EM Tomcat is already running, not attempting to restart it
dpnctl: INFO: Resuming backup scheduler...
dpnctl: INFO: Backup scheduler resumed.
dpnctl: INFO: AvInstaller is already running.
admin@Avamar:~/>:
admin@Avamar:~/>: dpnctl status
dpnctl: INFO: gsan status: up
dpnctl: INFO: MCS status: up.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: up.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: disabled.
dpnctl: INFO: avinstaller status: up.
dpnctl: INFO: ConnectEMC status: up.
dpnctl: INFO: ddrmaint-service status: up.

Verify the Avamar is now online and good to go.

admin@Avamar:~/>: status.dpn
Wed Mar 21 13:03:48 PDT 2018  [AVAMAR.XIOLOGIX.LOCAL] Wed Mar 21 20:03:48 2018 UTC (Initialized Fri Jun 23 19:03:09 2017 UTC)
Node   IP Address     Version   State   Runlevel  Srvr+Root+User Dis Suspend Load UsedMB Errlen  %Full   Percent Full and Stripe Status by Disk
0.0    10.10.10.100   7.4.1-58  ONLINE fullaccess mhpu+0hpu+0hpu   1 false   0.88 9545  2606806   0.1%   0%(onl:40 )  0%(onl:36 )
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable

System ID: xxxxxxxxxx@00:oo:00:oo:00:ee

All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0hpu)
System-Status: ok
Access-Status: full

No checkpoint yet
No GC yet
No hfscheck yet

Maintenance windows scheduler capacity profile is active.
  The maintenance window is currently running.
  Currently running task(s): script
  Next backup window start time: Wed Mar 21 20:00:00 2018 PDT
  Next maintenance window start time: Thu Mar 22 08:00:00 2018 PDT

If the scheduler is not started, restart the maintenance windows scheduler:

Maintenance windows scheduler capacity profile is active.
  WARNING: Scheduler is WAITING TO START until Thu Mar 22 08:00:00 2018 PDT.
  Next backup window start time: Thu Mar 22 20:00:00 2018 PDT
  Next maintenance window start time: Thu Mar 22 08:00:00 2018 PDT
admin@NC-Avamar:~/>: dpnctl stop maint
dpnctl: INFO: Suspending maintenance windows scheduler...
admin@NC-Avamar:~/>: dpnctl start maint
dpnctl: INFO: Resuming maintenance windows scheduler...
dpnctl: INFO: maintenance windows scheduler resumed.

Dell EMC Data Protection patches for Spectre/Meltdown

Dell EMC has not released any patches for the DPS products yet, but I bet they will be coming soon. After all, Avamar and Data Domain run on Dell server hardware with Intel processors these days.

Here is the official product matrix for Dell EMC products, including Avamar, Networker, and some Data Domain products (notably absent is Data Domain physical hardware). The link requires a Dell EMC Support login.

https://emcservice.force.com/CustomersPartners/kA6f1000000FD0gCAG


Making free space on a Full Data Domain

Have a Data Domain that's full? Need to free up space so your backups continue to run? Check for old snapshots, expire them, then free up space with a file system cleaning.

I had a customer whose replication Data Domain was significantly more utilized (20% more!) than the one at the primary site, and was also completely full, which was causing replications to fail. When I checked the snapshots on one of the MTrees, I saw ten snapshots from almost a year ago. Together these snapshots were using around 16 TB of the 67 TB of space on the Data Domain.

SSH into the Data Domain as sysadmin. Use the mtree list command to show the Data Domain MTree structure.

sysadmin@DataDomain# mtree list
Name                             Pre-Comp (GiB)   Status
------------------------------   --------------   ------
/data/col1/avamar-1389654222          1487697.5   RW
/data/col1/avamar-1477523534           561707.6   RW
/data/col1/backup                           0.0   RW
------------------------------   --------------   ------

Checking the smaller of the MTrees shows what we expect to see:

sysadmin@DataDom1# snapshot list mtree /data/col1/avamar-1477523534
Snapshot Information for MTree: /data/col1/avamar-1477523534
----------------------------------------------
Name                Pre-Comp (GiB)   Create Date         Retain Until        Status
-----------------   --------------   -----------------   -----------------   -------
cp.20170622161010         637792.0   Jun 22 2017 09:11   Jun 28 2017 08:50   expired
cp.20170622164953         637792.0   Jun 22 2017 09:51   Jun 28 2017 08:51   expired
cp.20170628151023         656374.0   Jun 28 2017 08:11
cp.20170628155016         661756.7   Jun 28 2017 08:51
-----------------   --------------   -----------------   -----------------   -------

Snapshot Summary
-------------------
Total:            4
Not expired:      2
Expired:          2

However, when we take a look at the larger of the MTrees, we see where the problem lies.

sysadmin@DataDom1# snapshot list mtree /data/col1/avamar-1389654222
Snapshot Information for MTree: /data/col1/avamar-1389654222
----------------------------------------------
Name                Pre-Comp (GiB)   Create Date         Retain Until   Status
-----------------   --------------   -----------------   ------------   ------
cp.20160910162130        1076444.1   Sep 10 2016 09:21
cp.20160911153233        1100654.4   Sep 11 2016 08:32
cp.20160913170521        1161823.8   Sep 13 2016 10:05
cp.20160917165721        1257219.9   Sep 17 2016 09:57
cp.20160918154309        1284386.1   Sep 18 2016 08:43
cp.20160919170556        1309786.6   Sep 19 2016 10:05
cp.20160920154205        1339382.0   Sep 20 2016 08:42
cp.20160921154143        1359362.4   Sep 21 2016 08:41
cp.20160922154231        1377744.1   Sep 22 2016 08:42
cp.20160928162422        1432382.5   Sep 28 2016 09:24
-----------------   --------------   -----------------   ------------   ------

Snapshot Summary
-------------------
Total:           10
Not expired:     10
Expired:          0

Ten snapshots from almost a year ago taking up that much space is bad, so let's manually expire each one.

sysadmin@DataDom1# snapshot expire cp.20160910162130 mtree /data/col1/avamar-1389654222
Snapshot "cp.20160910162130" for mtree "/data/col1/avamar-1389654222" will be retained until Jul3 2017 08:16.

Do this nine more times to mark all of the old snapshots for deletion.
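
If you'd rather not type the expire command nine times, you can drive it in a loop over SSH from the Avamar utility node or a workstation. This is just a sketch using the snapshot names from the listing above; expect a password prompt on each pass unless you have key-based auth to the Data Domain.

for snap in cp.20160911153233 cp.20160913170521 cp.20160917165721 \
            cp.20160918154309 cp.20160919170556 cp.20160920154205 \
            cp.20160921154143 cp.20160922154231 cp.20160928162422
do
  # each pass runs one DD CLI "snapshot expire" remotely as sysadmin
  ssh sysadmin@DataDom1 "snapshot expire $snap mtree /data/col1/avamar-1389654222"
done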

Checking the snapshots now shows them all expired.

sysadmin@DataDom1# snapshot list mtree /data/col1/avamar-1389654222
Snapshot Information for MTree: /data/col1/avamar-1389654222
----------------------------------------------
Name                Pre-Comp (GiB)   Create Date         Retain Until        Status
-----------------   --------------   -----------------   -----------------   -------
cp.20160910162130        1076444.1   Sep 10 2016 09:21   Jul  3 2017 08:16   expired
cp.20160911153233        1100654.4   Sep 11 2016 08:32   Jul  3 2017 08:17   expired
cp.20160913170521        1161823.8   Sep 13 2016 10:05   Jul  3 2017 08:17   expired
cp.20160917165721        1257219.9   Sep 17 2016 09:57   Jul  3 2017 08:17   expired
cp.20160918154309        1284386.1   Sep 18 2016 08:43   Jul  3 2017 08:18   expired
cp.20160919170556        1309786.6   Sep 19 2016 10:05   Jul  3 2017 08:18   expired
cp.20160920154205        1339382.0   Sep 20 2016 08:42   Jul  3 2017 08:18   expired
cp.20160921154143        1359362.4   Sep 21 2016 08:41   Jul  3 2017 08:19   expired
cp.20160922154231        1377744.1   Sep 22 2016 08:42   Jul  3 2017 08:19   expired
cp.20160928162422        1432382.5   Sep 28 2016 09:24   Jul  3 2017 08:19   expired
-----------------   --------------   -----------------   -----------------   -------

Snapshot Summary
-------------------
Total:           10
Not expired:      0
Expired:         10

And now we manually kick off the file system cleaning.

sysadmin@DataDom1# filesys clean start nowait
Cleaning started.  Use 'filesys clean watch' to monitor progress.
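
If you don't want to sit on 'filesys clean watch', you can check in on the cleaning and on reclaimed space periodically with the standard DD OS commands (output format varies by version):

sysadmin@DataDom1# filesys clean status
sysadmin@DataDom1# filesys show space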

When the cleaning is finished we should be good to go. In this case we cleared the Avamar logs, kicked off an Avamar checkpoint, and resumed the scheduler.

Avamar - RabbitMQ errors

If you are seeing RabbitMQ errors in your Avamar logs, you'll want to correct the issue. The RabbitMQ error is often due to a misconfiguration of the BRM service. If you're not using BRM but the MCS is expecting to send data to BRM, you can get RabbitMQ errors in the logs.

Check your mcserver.xml file for the "enableBrmService" entry. If it is set to "true" and you're not using BRM, it should be set to "false".

admin@AVAMAR:/usr/local/avamar/var/mc/server_data/prefs/>: grep -i enableBRM mcserver.xml

You should see output similar to

<entry key="enableBrmService" value="true" />

If so, edit mcserver.xml with vi and change the entry to "false".
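
If you'd rather script the change than edit by hand, a quick sed swap does the same thing (a sketch that assumes the entry looks exactly like the line above; take a backup copy first):

cd /usr/local/avamar/var/mc/server_data/prefs/
cp mcserver.xml mcserver.xml.bak
sed -i 's|key="enableBrmService" value="true"|key="enableBrmService" value="false"|' mcserver.xml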

Stop, then start, the MCS service:

mcserver.sh --stop

mcserver.sh --start
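
Once MCS comes back up, dpnctl can confirm it before you head to the GUI:

dpnctl status mcs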

Log back into the GUI and clear out the error logs.

You should not see the RabbitMQ errors any more.

 

Avamar: Root-to-root migration

Moving from one Avamar to a new Avamar, for whatever reason, isn't particularly hard, but it can present some challenges, especially since EMC support hands you off to EMC Professional Services. There are guides floating around, and this is all based on those guides.

The first thing to do is add an exception to the GSAN port file. This allows the unencrypted or encrypted traffic necessary to perform the root-to-root migration. Even if you can already telnet on ports 27000 and 29000 from every node to every other node, you'll still need to add this.
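
A quick way to run that connectivity check from each node (substitute your destination Avamar's hostname or IP):

telnet <destination-avamar> 27000
telnet <destination-avamar> 29000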

On both the source (old) node(s) and the destination (new) node(s), edit the gsan-port file (or create it if it isn't there). Log on to the Utility node as admin, then su to root. Load the dpnid keys with:

ssh-agent bash
ssh-add ~admin/.ssh/admin_key

Create (or edit) the gsan-port file using vi:

vi /usr/local/avamar/lib/admin/security/gsan-port

On the first line of the file, add the port exception (27000 for unencrypted, 29000 for encrypted):

GSAN_PLAIN_TEXT='27000,'

Save the file and exit VI.

If you're on a grid, copy the file to all nodes with

mapall --user=root copy /usr/local/avamar/lib/admin/security/gsan-port

Then, copy the file to the correct directory with

mapall --user=root mv /root/usr/local/avamar/lib/admin/security/gsan-port /usr/local/avamar/lib/admin/security/

Restart the Avamar firewall services

mapall --noerror --all+ --user=root 'service avfirewall restart'

If you're using a Data Domain as your backup data target, add the Data Domain to the destination Avamar.

Now we can get started with the actual migration.

On the source server, su back to admin and flush the MCS:

mcserver.sh --flush

On the destination server, stop the MCS service with

dpnctl stop mcs

On the source server, run the migration command, where DST is the destination Avamar IP address or hostname, DST-ROOT-PASSWORD is the destination root password, and SRC-ROOT-PASSWORD is the source root password.

Note: there are two root users in Avamar, "Avamar root" and "OS root". It's the Avamar root user's credentials we're looking for here. You can verify the credentials by running the following command on both the source and destination Avamar nodes.

avmgr logn --id=<username> --ap=<password> --debug

Migration Command:

nohup avrepl --operation=replicate --[replscript]dstaddr=DST --[replscript]dstid=root --dstpassword=DST-ROOT-PASSWORD --[avtar]id=root --[replscript]fullcopy=true --ap=SRC-ROOT-PASSWORD --send-adhoc-request=false --max-streams=8 --[replscript]timeout=0

You can watch the progress of the replication by SSHing back into the source Avamar and running

tail -f /home/admin/nohup.out

Root-to-root migration can take anywhere from several hours to several days, depending on the amount of data to transfer.

Prior to cutover, you should run the migration command again to catch any new data that was put onto the source server after the replication command was first run.

When you're ready to cutover, SSH into the destination Avamar as admin and run

ssh-agent bash
ssh-add ~admin/.ssh/admin_key

to load the SSH key, then run:

mcserver.sh --restore --restoretype=new-system

Follow the prompts as requested by Avamar to complete the migration.

Start the MCS service on the destination server with

dpnctl start mcs

Take a checkpoint on the new destination Avamar and verify data with a test restore or two.

 

UPDATE: Apparently, if you're migrating from a version prior to 7.3 up to 7.3, you need an undocumented switch in your migration command. Add

--[replscript]dstencrypt=proprietary

to your migration command and you'll be off and running.

Your migration command will look something like:

nohup avrepl --operation=replicate --[replscript]dstaddr=<destination FQDN> --[replscript]dstid=root --dstpassword=<destination password> --[avtar]id=root --[replscript]fullcopy=true --ap=<source password> --[replscript]dstencrypt=proprietary --send-adhoc-request=false --max-streams=8 --[replscript]timeout=0 --debug &

Avamar: single node sizing

When you replace a grid with a single node, customers are often concerned about the amount of metadata on the grid and whether a single node will have enough storage. Regardless of the size of the grid, the metadata space is limited to 3.9 TB, which (depending on the amount of deduplication you're getting) amounts to somewhere in the range of 500-750 TB of backup data. Conveniently, an M1200 utility node has 3.9 TB of usable space. So, regardless of the size of your grid, it can be replaced with an M1200 single-node Avamar, as long as you don't store any backup data on the node itself. There is a procedure to lock the Avamar from receiving backup data, and it's a good idea to run it as part of the migration (or new installation) of a single-node Avamar.