
Summary



While running a catalog condense operation, the following error appears in the job report: "SNBNCX6025J NDMP server entry: Type(NDMP_LOG_ERROR), ID(3544), Text(Failed to lock [G:/.BackupExpressSnapshots/SSSV_clst-share-xrs/{BEX-473A-51029B641388}/[clst-share-xrs]BEX_UMD-WNCLUST@{98C484C2}/BEXIMAGE.RAW]. Probably in use by IA: GetLastError(3):The system cannot find the path specified. )"

Symptoms



While running a catalog condense operation, the following error appears in the job report:

"SNBNCX6025J NDMP server entry: Type(NDMP_LOG_ERROR), ID(3544), Text(Failed to lock [G:/.BackupExpressSnapshots/XXXXXXX/{BEX-473A-51029B641388}/[JOBNAME]NOENAME@{98C484C2}/BEXIMAGE.RAW]. Probably in use by IA: GetLastError(3):The system cannot find the path specified. )"

or 

SNBNCX6025J NDMP server entry: Type(NDMP_LOG_ERROR), ID(2784), Text(Previous snapshot consolidation error: Thread[2848], Time[Thu Jun 01 15:17:51 2017], snapshot[H:/.BackupExpressSnapshots/SSSV_XXXXX/{BEX-483A-58A55BAD00B0}])

This error can have one of two causes: an existing IA (Instant Access) map on that snapshot, or a stale lock on the snapshot left over from a previous operation that was never released.



Resolution



Open the DPX GUI and check whether there are any existing IA maps for the snapshot.

If you are not sure which node might have an IA map of this snapshot, perform the following:

1) Log in to the OSS node and check the file "BexDevices.conf".

Open, but do not edit, the following text file on the OSS server: {BEX Directory}\StarWind\BexDevices.conf. This file contains one line for each snapshot that has an IA-mapped volume.

The host name listed after the string "Initiator_Host_Name:" is the host that has access to the BEXIMAGE.RAW file; this identifies the host with active IA maps.
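
If you prefer the command line, a quick way to inspect the file is with findstr (a sketch; the install path shown is only an example, substitute your actual BEX directory, and the snapshot ID is taken from the error message above):

rem List all IA initiator hosts recorded in BexDevices.conf
findstr /i "Initiator_Host_Name" "C:\Program Files\BEX\StarWind\BexDevices.conf"

rem Check whether the problematic snapshot has an entry
findstr /i "BEX-473A-51029B641388" "C:\Program Files\BEX\StarWind\BexDevices.conf"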

If an IA map exists, unmap the snapshot and then run the condense operation.

If there are no existing IA maps, the likely cause is a lock that was held on the snapshot at some point in the past and could not be released gracefully; in this case the condense job report logs the error "Failed to lock". If so, follow the procedure below to determine which process has a lock on that snapshot.

The best way to find which process holds the lock is to use a utility called Process Explorer on the OSS server. This is a Windows Sysinternals utility that shows details on locked files.


Below is a reference to the TechNet article on using Process Explorer:

technet.microsoft.com/en-in/sysinternals/bb896653.aspx

You can download this tool and run it as per the instructions in the article. This should help determine which process holds the lock. It is likely that one of our modules, nibbler.exe or rawtoc, is holding the lock on the BEXIMAGE.RAW file. Once the lock is cleared, the condense should run successfully.

Using Process Explorer, determine the actual process holding the lock. Make a note of it and release the lock.
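
In Process Explorer, the Find Handle or DLL dialog (Ctrl+F) lets you search for "BEXIMAGE.RAW" and shows the owning process. Alternatively, the Sysinternals handle.exe command-line utility from the same suite can perform the search from a prompt (a sketch; handle.exe is a separate download and must be run from an elevated prompt):

rem Search all open handles for the locked file name
handle.exe -accepteula BEXIMAGE.RAW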

Please follow the steps below (a command-line sketch for the service restart follows the list):

a) Ensure that no backups are running against the Advanced Server

b) Stop the DPX CMAgent service

c) Stop the DPX Advanced Protection Manager service (this will stop nibbler.exe and should also stop rawtoc_uni.exe)

d) Verify with Process Explorer that there are no further locks

e) Restart the DPX CMAgent Service

f) Restart the DPX Advanced Protection Manager service
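
A minimal command-line sketch of steps b) through f), assuming the Windows service names match the display names used above (verify the exact names first with sc query):

rem Find the exact DPX service names registered on this server
sc query state= all | findstr /i "DPX"

rem Stop the agents (service names are assumptions; use the names returned above)
net stop "DPX CMAgent"
net stop "DPX Advanced Protection Manager"

rem Verify with Process Explorer or handle.exe that no locks remain, then restart
net start "DPX CMAgent"
net start "DPX Advanced Protection Manager"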

Now run the catalog condense. The first run may report an error, which is expected since it reports on the previous errors. Subsequent condense runs should complete without error.

If Process Explorer does not show any locks on BEXIMAGE.RAW, there is a possibility that a DP (Double Protection) job is running in the background. Confirm whether this is the case.
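
As a quick command-line check you can see whether the DPX modules named earlier (nibbler.exe, rawtoc_uni.exe) are currently active on the server; whether a DP job is driving them still needs to be confirmed in the DPX job monitor (a sketch):

rem Check whether nibbler.exe or rawtoc_uni.exe is currently running
tasklist /fi "imagename eq nibbler.exe"
tasklist /fi "imagename eq rawtoc_uni.exe"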

** If the error is due to the snapshot itself no longer existing on the XRS server ("The system cannot find the file specified"), follow the steps below:


Log in to the OSS node and navigate to the XRSDB folder. Under XRSDB, check the directory listing, look for a folder named "MarkDeletedSnapshots", and confirm whether there are any references to the problematic snapshot (in this case SSSV_clst-share-xrs/{BEX-473A-51029B641388}/[clst-share-xrs]BEX_UMD-WNCLUST@{98C484C2}/BEXIMAGE.RAW).

You can use the "dir /s" command to list the files. An example is shown below:

D:\Bex\Program files\XRSDB> dir /s > c:\output.txt

Directory of C:\Program Files\BEX\XRSDB\markDeletedSnapshots\G\SSSV_clst-share-xrs

04/19/2013 11:00 AM <DIR> .

04/19/2013 11:00 AM <DIR> ..

Review the directory output and check for the existence of the qtree/snapshot. If the output matches the problematic snapshot, delete that entry from the above directory, recycle the Advanced Protection Manager service, and re-run the catalog condense, which should fix the issue.
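
To locate the entry quickly in the captured listing, you can search the output file for the snapshot ID from the error message (a sketch; the output path matches the dir example above):

rem Search the captured directory listing for the problematic snapshot ID
findstr /i "BEX-473A-51029B641388" c:\output.txt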