Cerberus backup fails to restore correctly

Description

After a backup is performed and stored, issuing a restore of that backup onto another node correctly shows a green confirmation prompt with the message "restore started".

A few seconds later the machine is shown in the stopped state, as if the restore had succeeded, yet it cannot be started and the backup has not actually been restored to the node at the ZFS level. The VM does show up via vmadm list on the node, e.g.:

[root@smartosn2 ~]# vmadm list | grep Dalm
1bab921a-c37e-40ad-adec-b6f8be13e820 OS 2048 stopped DalmatinerDB
[root@smartosn2 ~]# zfs list -t all | grep 1bab921a-c37e-40ad-adec-b6f8be13e820
[root@smartosn2 ~]#

As the zfs list output above shows, the dataset does not exist on disk on the node. A quick way to cross-check this is sketched below.
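For reference, the dataset's absence can be confirmed directly from the global zone (a sketch; "zones" is the default SmartOS pool name, and the json tool ships with SmartOS):

uuid=1bab921a-c37e-40ad-adec-b6f8be13e820
vmadm get "$uuid" | json zfs_filesystem   # the dataset the VM config expects, normally zones/$uuid
zfs list "zones/$uuid"                    # fails with "dataset does not exist" on the affected node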

Logs, gathered via the "fifoadm diag" command, will be attached shortly.

Note: you will see errors in the logs related to my stopping the DalmatinerDB zone in order to move it to a more resilient node. Those errors occurred because the config files still had an entry pointing "folsom_ddb.ip" at 10.1.1.117, the DalmatinerDB zone that was backed up for restoration on another node. Once that entry was commented out, the errors were resolved; a sketch of the entry follows.
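For illustration only, the stale entry looked roughly like this (a sketch; the exact config file path and key/value syntax are assumptions about the FiFo service config format):

## Stale pointer to the backed-up DalmatinerDB zone; commented out to stop the log errors
# folsom_ddb.ip = 10.1.1.117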

Environment

None

Status

Assignee

Heinz N. Gies

Reporter

Mark Slatem

Labels

Components

Fix versions

Affects versions

Priority

High