
Summary

If the production network uses the same subnet as the docker0 network on the Master appliance, hosts on that subnet will not be able to communicate with the DPX Master server appliance.

Symptoms

  • If you are on a host in the 172.17.x.x, 172.18.x.x, 172.19.x.x or 172.20.x.x subnets, you will not be able to reach the DPX Master server appliance while it uses the default docker bridge.
  • Conversely, the DPX Master server appliance with the default docker bridge will not be able to ping anything in the 172.17.x.x, 172.18.x.x, 172.19.x.x or 172.20.x.x subnets (including the gateway), because traffic to those addresses is routed to the internal docker0 interface instead of the production network (see the routing check below).
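
A quick way to confirm the conflict from the appliance console is to look at the routing table; with the default docker bridge, the 172.17.0.0/16 route points at docker0 instead of the production interface. The output below is illustrative:

$ ip route | grep docker0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1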

Resolution

To resolve the routing problem, the docker0 network on the Master server appliance needs to be moved to a subnet that is not used on the production network. Follow these steps:

  • Open the vSphere Console
  • Browse to your DPX Master server appliance VM
  • Launch the Remote Console
  • Log into the appliance with the dpxadmin user
  • Create the file /etc/docker/daemon.json with a new address pool for the docker0 and bridge interfaces, using subnets not in use on the current production network, e.g. 10.200.0.0/16.
    • [dpxadmin@dpx-dev docker]$ sudo vi /etc/docker/daemon.json
{
       "default-address-pools": [
               {"base":"10.200.0.0/16","size":16},
               {"base":"10.201.0.0/16","size":16},
               {"base":"10.202.0.0/16","size":16},
               {"base":"10.203.0.0/16","size":16},
               {"base":"10.204.0.0/16","size":16},
               {"base":"10.205.0.0/16","size":16}
       ]
}
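  • (Optional) Check that the file is valid JSON before restarting Docker, since a syntax error in /etc/docker/daemon.json can prevent the daemon from starting. The check below assumes Python 3 is available on the appliance:
$ sudo python3 -m json.tool /etc/docker/daemon.json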
  • Restart docker
$ sudo systemctl restart docker 
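  • Confirm the Docker service came back up cleanly (standard systemd and Docker CLI commands; docker may require sudo for the dpxadmin user)
$ sudo systemctl status docker --no-pager
$ sudo docker ps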
  • Validate:
$ ip addr show docker0

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:d5:4b:2c:be brd ff:ff:ff:ff:ff:ff
    inet 10.200.0.1/16 brd 10.200.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d5ff:fe4b:2cbe/64 scope link
       valid_lft forever preferred_lft forever
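
Finally, confirm from the appliance that hosts on the previously conflicting production subnet are reachable again. The address below is a placeholder; substitute the production gateway or any host in the 172.17.x.x - 172.20.x.x range:

$ ping -c 3 <production-gateway-or-host>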