
Summary



A backup job can fail with an out-of-memory condition due to a very high number (hundreds) of mount points on a Linux client, caused by services such as Docker or Kubernetes.

Symptoms



When a Linux client has a large number of volumes/mount points to be automatically excluded from a job, the job may fail while attempting to allocate more than 4 GB of memory. The large number of mount points is caused by Kubernetes or Docker installed on the host with a large number (hundreds) of containers. Manual exclusion of the mount points has no effect.

Job error:
<ClientIP> ssbrowse <date> SNBFBR5100E fb_procqfileinfo : error allocating (4294946809) bytes
<MasterIP> ssjobhnd <date> SNBJH_3053E *** File browser on node <nodeName> returned error 5 ***
Errors in the fbl log on the client side:
<date> SNBFBR5100E fb_procqfileinfo : error allocating (4294946809) bytes
<date> SNBFBR5201E fb_recmessage : rc (5) from (fb_procqfileinfo)
<date> SNBFBR1006E Message TOK_QFILEINFO processing failed, rc=5
<date> SNBFBR5201E fb_wait : rc (5) from (fb_recmessage)
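
To confirm whether a client is affected, it can help to check how many mount points the host actually has and how many of them come from a container runtime. The short Python sketch below is a diagnostic aid only (not part of DPX); the container-related name hints are assumptions and may need adjusting for a given environment.

#!/usr/bin/env python3
# Diagnostic sketch: count mount points on a Linux client by parsing
# /proc/self/mounts, and report how many appear to belong to container
# runtimes such as Docker, containerd, or Kubernetes (kubelet).

from collections import Counter

# Assumed name fragments for container-related mounts; adjust as needed.
CONTAINER_HINTS = ("docker", "containerd", "kubelet", "kubernetes")

def count_mounts(path="/proc/self/mounts"):
    total = 0
    container_related = 0
    fs_types = Counter()
    with open(path) as mounts:
        for line in mounts:
            # /proc/self/mounts fields: device mountpoint fstype options dump pass
            device, mountpoint, fstype = line.split()[:3]
            total += 1
            fs_types[fstype] += 1
            if any(hint in mountpoint or hint in device for hint in CONTAINER_HINTS):
                container_related += 1
    return total, container_related, fs_types

if __name__ == "__main__":
    total, container_related, fs_types = count_mounts()
    print(f"Total mount points:             {total}")
    print(f"Container-related mount points: {container_related}")
    print("Most common filesystem types:   "
          + ", ".join(f"{fs} ({n})" for fs, n in fs_types.most_common(5)))

On an affected host, the total count is typically in the hundreds, with most entries created by the container runtime.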


Resolution



This behavior will be addressed in a DPX Maintenance Update; however, a complete Maintenance Update is not available at this time. Contact Catalogic Software Data Protection Technical Support to obtain the required update, referencing Issue ID DPSUST-4809.