When you create a network extension, you have the option to enable Mobility Optimized Networking (MON). MON can help reduce network latency between VMs that have been migrated to a network extension and other network resources at the same site as that extension. By default, a VM that has been migrated to an extended network still routes traffic through its default gateway at the source site, just as it did pre-migration. This can result in traffic crossing the tunnel between the NE appliances at the source and destination sites twice just to reach a network resource that may sit at the same site as the migrated VM.
With MON, the migrated VM’s default gateway can instead be assumed by the T1 gateway at the destination site. This can drastically reduce latency when much of the traffic is between the migrated VM and other resources at the destination site.
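If you want to observe this tromboning for yourself, a quick check from the guest is to look at which device is answering for the default gateway and what the first hop toward another segment looks like. A minimal sketch, assuming the same addresses used later in this post and that traceroute is installed in the guest:
# show the MAC currently answering for the default gateway (172.31.2.1 in this example)
ip neigh show 172.31.2.1
# trace the path to a VM on another segment; the intermediate hop is the gateway doing the routing
traceroute -n 172.31.1.2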
In my earlier post, HCX+ Network Extension, several networks were extended from the source to the destination site. This resulted in new segments being created in NSX at the destination site. For this example, we’ll be working with the cda-db-seg and cda-app-seg segments. You can see how these look at the destination site in the following screenshots:




Note that for both of these segments, the Gateway Connectivity option is set to Off.
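If you prefer the API over the UI, the same setting is exposed on the segment object in the NSX Policy API as advanced_config.connectivity. A rough sketch against the destination site’s NSX Manager (the manager FQDN, credentials, and segment ID below are placeholders, and the policy ID of an HCX-created segment may differ from its display name, so adjust as needed):
# check the gateway connectivity setting on the extended segment
curl -sk -u admin https://nsx-mgr-b.example.com/policy/api/v1/infra/segments/cda-db-seg | jq '.advanced_config.connectivity'
# expect "OFF" before MON is activated and "ON" afterward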
Before any of the app or db VMs are migrated, we’ll take a look at some basic network configuration on each of them as well as test latency between them.
root@cda-db-01 [ ~ ]# ip route
default via 172.31.2.1 dev eth0 proto static
172.31.2.0/24 dev eth0 proto kernel scope link src 172.31.2.2
root@cda-db-01 [ ~ ]# ping 172.31.1.2 -c 10
PING 172.31.1.2 (172.31.1.2) 56(84) bytes of data.
64 bytes from 172.31.1.2: icmp_seq=1 ttl=63 time=4.05 ms
64 bytes from 172.31.1.2: icmp_seq=2 ttl=63 time=2.49 ms
64 bytes from 172.31.1.2: icmp_seq=3 ttl=63 time=2.90 ms
64 bytes from 172.31.1.2: icmp_seq=4 ttl=63 time=2.91 ms
64 bytes from 172.31.1.2: icmp_seq=5 ttl=63 time=2.43 ms
64 bytes from 172.31.1.2: icmp_seq=6 ttl=63 time=2.14 ms
64 bytes from 172.31.1.2: icmp_seq=7 ttl=63 time=2.61 ms
64 bytes from 172.31.1.2: icmp_seq=8 ttl=63 time=4.69 ms
64 bytes from 172.31.1.2: icmp_seq=9 ttl=63 time=2.23 ms
64 bytes from 172.31.1.2: icmp_seq=10 ttl=63 time=2.62 ms
--- 172.31.1.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 24ms
rtt min/avg/max/mdev = 2.138/2.907/4.691/0.785 ms
I’m pinging the cda-app-01 VM, which is on a different segment than the cda-db-01 VM but under the same NSX infrastructure. You can see that the average latency between the two VMs is 2.907 ms.
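To make before-and-after comparisons easier, you can boil a ping run down to just the average RTT. A small helper, assuming the iputils ping output format shown above:
# print only the average round-trip time from a quiet 10-packet ping
ping -c 10 -q 172.31.1.2 | awk -F'/' '/^rtt/ {print "avg rtt:", $5, "ms"}'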
root@cda-db-01 [ ~ ]# arp -a
_gateway (172.31.2.1) at 02:50:56:00:14:00 [ether] on eth0
Here you can see that the MAC of the default gateway (an interface on the T1 service router in NSX at the source site) is 02:50:56:00:14:00.
Similar statistics are observed for the cda-app-01 VM. The only big difference is the MAC of the gateway, which is different because it resides on a different interface of the same T1 service router.
root@cda-app-01 [ ~ ]# arp -a
_gateway (172.31.1.1) at 02:50:56:00:54:00 [ether] on eth0
Next, we’ll take a look at the T1 service router on the source side.
nsx-edge-01a> get logical-router
Thu Oct 19 2023 UTC 10:29:37.473
Logical Router
UUID VRF LR-ID Name Type Ports Neighbors
4de651b3-0bff-4fcf-acad-87ba55047ce0 0 5 SR-TIER-1_CDA SERVICE_ROUTER_TIER1 9 2/50000
ccb64b08-dbcd-4fbd-a32b-739250a736ee 1 1 DR-Tier-0_CDA DISTRIBUTED_ROUTER_TIER0 5 2/50000
c6e38920-568f-4e22-9556-ffae71f8076e 2 2 SR-Tier-0_CDA SERVICE_ROUTER_TIER0 5 1/50000
736a80e3-23f6-5a2d-81d6-bbefb2786666 4 0 TUNNEL 3 1/5000
The T1 service router is on VRF 0 so we’ll switch to that.
nsx-edge-01a> vrf 0
If we look at the interfaces, we’ll find the two that correspond to the gateway addresses noted for each VM earlier (this output is truncated).
nsx-edge-01a(tier1_sr[2])> get interface
Interface : 6a9c6229-578d-4197-a3a9-073af162aee3
Ifuid : 272
Name : t1-TIER-1_CDA-default-cda-db-int-svclrp
Fwd-mode : IPV4_ONLY
Mode : lif
Port-type : service
IP/Mask : 172.31.2.1/24
MAC : 02:50:56:00:14:00
VNI : 73728
Access-VLAN : untagged
LS port : 84cf9c05-dd50-4541-b2cc-995903f5b98c
Urpf-mode : STRICT_MODE
DAD-mode : LOOSE
RA-mode : SLAAC_DNS_THROUGH_RA(M=0, O=0)
Admin : up
Op_state : up
Enable-mcast : False
MTU : 7900
arp_proxy :
Interface : 702b67bf-158d-425e-8f5f-96d55bcb9905
Ifuid : 281
Name : t1-TIER-1_CDA-default-cda-app-int-svclrp
Fwd-mode : IPV4_ONLY
Mode : lif
Port-type : service
IP/Mask : 172.31.1.1/24
MAC : 02:50:56:00:54:00
VNI : 69632
Access-VLAN : untagged
LS port : b44b2d83-29b6-456b-b75d-84689c39cacf
Urpf-mode : STRICT_MODE
DAD-mode : LOOSE
RA-mode : SLAAC_DNS_THROUGH_RA(M=0, O=0)
Admin : up
Op_state : up
Enable-mcast : False
MTU : 7900
arp_proxy :
You can also look at the forwarding table to see that there is some basic routing in place for the noted segments.
nsx-edge-01a(tier1_sr)> get forwarding
Thu Oct 19 2023 UTC 10:34:10.846
Logical Router
UUID VRF LR-ID Name Type
4de651b3-0bff-4fcf-acad-87ba55047ce0 0 5 SR-TIER-1_CDA SERVICE_ROUTER_TIER1
IPv4 Forwarding Table
IP Prefix Gateway IP Type UUID Gateway MAC
0.0.0.0/0 100.64.0.0 route 39884f86-6d3a-4306-8773-ddf82b4b6820 02:50:56:56:44:52
100.64.0.0/31 route 39884f86-6d3a-4306-8773-ddf82b4b6820
100.64.0.1/32 route ee5dd71d-43c6-52ae-b003-c57104ce8cd1
127.0.0.1/32 route 0ccac78e-a425-40e4-8937-71587f051295
172.31.0.0/24 route b56dda13-053f-493d-8b1f-c78d58108d27
172.31.0.1/32 route ee5dd71d-43c6-52ae-b003-c57104ce8cd1
172.31.1.0/24 route 702b67bf-158d-425e-8f5f-96d55bcb9905
172.31.1.1/32 route ee5dd71d-43c6-52ae-b003-c57104ce8cd1
172.31.2.0/24 route 6a9c6229-578d-4197-a3a9-073af162aee3
172.31.2.1/32 route ee5dd71d-43c6-52ae-b003-c57104ce8cd1
172.31.3.0/24 route cd30e216-6f88-46b2-a08f-4ba40efe8694
172.31.3.1/32 route ee5dd71d-43c6-52ae-b003-c57104ce8cd1
IPv6 Forwarding Table
IP Prefix Gateway IP Type UUID Gateway MAC
::/0 fc87:b009:4806:4c00::1 route 39884f86-6d3a-4306-8773-ddf82b4b6820
::1/128 route 0ccac78e-a425-40e4-8937-71587f051295
fc87:b009:4806:4c00::/64 route 39884f86-6d3a-4306-8773-ddf82b4b6820
fc87:b009:4806:4c00::2/128 route ee5dd71d-43c6-52ae-b003-c57104ce8cd1
We can also look at the T1 service router on the destination site and observe a similar configuration for the segments at that location (it will act as a “before” view when we modify the configuration later).
nsx-edge-01b> get logical-router
Thu Oct 19 2023 UTC 10:35:23.892
Logical Router
UUID VRF LR-ID Name Type Ports Neighbors
736a80e3-23f6-5a2d-81d6-bbefb2786666 0 0 TUNNEL 3 1/5000
32a7f5f1-6809-45f3-b622-5528d84633c8 1 1 DR-Tier-0_CDA DISTRIBUTED_ROUTER_TIER0 5 2/50000
17d39607-c2be-4206-8830-97ece6114f4b 2 5 SR-TIER-1_CDA SERVICE_ROUTER_TIER1 9 2/50000
d575e6a2-d144-44cf-8774-894d24d6683f 3 2 SR-Tier-0_CDA SERVICE_ROUTER_TIER0 5 1/50000
nsx-edge-01b> vrf 2
nsx-edge-01b(tier1_sr)> get forwarding
Thu Oct 19 2023 UTC 10:35:46.276
Logical Router
UUID VRF LR-ID Name Type
17d39607-c2be-4206-8830-97ece6114f4b 2 5 SR-TIER-1_CDA SERVICE_ROUTER_TIER1
IPv4 Forwarding Table
IP Prefix Gateway IP Type UUID Gateway MAC
0.0.0.0/0 100.64.0.0 route 86a845e4-75fe-413b-b61a-1196469d2f2c 02:50:56:56:44:52
100.64.0.0/31 route 86a845e4-75fe-413b-b61a-1196469d2f2c
100.64.0.1/32 route e1b2291f-770c-5d5a-ba9e-3f5e5bc39d06
127.0.0.1/32 route 225ad34c-7786-40ae-ae07-46f729b9967a
172.32.0.0/24 route e92d85d5-a193-4dde-b0e8-13e774d8c702
172.32.0.1/32 route e1b2291f-770c-5d5a-ba9e-3f5e5bc39d06
172.32.1.0/24 route 2015a02c-692d-462c-a860-f24c83197ea3
172.32.1.1/32 route e1b2291f-770c-5d5a-ba9e-3f5e5bc39d06
172.32.2.0/24 route 5ffdee88-6b6f-435f-b050-955259b9eac2
172.32.2.1/32 route e1b2291f-770c-5d5a-ba9e-3f5e5bc39d06
172.32.3.0/24 route eb4c7384-5161-4f35-ae93-a079cff01b8c
172.32.3.1/32 route e1b2291f-770c-5d5a-ba9e-3f5e5bc39d06
IPv6 Forwarding Table
IP Prefix Gateway IP Type UUID Gateway MAC
::/0 fcea:81ca:3ac3:e000::1 route 86a845e4-75fe-413b-b61a-1196469d2f2c
::1/128 route 225ad34c-7786-40ae-ae07-46f729b9967a
fcea:81ca:3ac3:e000::/64 route 86a845e4-75fe-413b-b61a-1196469d2f2c
Now that we have an understanding of the configuration at both sites, we can migrate the app and db VMs from the source site to the destination.

Click the New Mobility Group button. Set the source and destination sites appropriately. In this example, the source site is Palo Alto and the destination site is Denver.

Set a meaningful name for the Mobility Group and click the Next button.
Select the VMs to migrate. In this example, they are the cda-app-01 and cda-db-01 VMs.

Click the Next button.

You’re presented with a summary of how the migration is going to play out with default settings.
- Replication Assisted vMotion (RAV) will be used as the migration mechanism.
- The VMs will be placed into the RegionB01-Compute cluster and onto the vol2 datastore (these are the only choices at the destination site).
- The networks to be used at the destination site are the network extension segments associated with the segments at the source site.
These options are fine for this example, but you can change them as needed or click the Advanced Mode toggle for even more options.
Click the Next button.

If everything looks good, click the Start Migration button.
After a short time, the migration will be complete.
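If you’d like to confirm the new placement from a shell rather than the vSphere Client, a tool like govc can do it. A quick sketch, assuming govc is installed and its GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables point at the destination vCenter:
# show where the migrated VMs landed (host, power state, IP, etc.)
govc vm.info cda-db-01
govc vm.info cda-app-01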

Let’s check out some of the same networking information and performance data we looked at prior to the migration.
First, we’ll clear the arp entry for the gateway to ensure we’re looking at current data.
root@cda-db-01 [ ~ ]# arp -d _gateway
root@cda-db-01 [ ~ ]# ping 172.31.1.2 -c 10
PING 172.31.1.2 (172.31.1.2) 56(84) bytes of data.
64 bytes from 172.31.1.2: icmp_seq=1 ttl=63 time=11.7 ms
64 bytes from 172.31.1.2: icmp_seq=2 ttl=63 time=9.98 ms
64 bytes from 172.31.1.2: icmp_seq=3 ttl=63 time=9.67 ms
64 bytes from 172.31.1.2: icmp_seq=4 ttl=63 time=8.96 ms
64 bytes from 172.31.1.2: icmp_seq=5 ttl=63 time=8.65 ms
64 bytes from 172.31.1.2: icmp_seq=6 ttl=63 time=8.08 ms
64 bytes from 172.31.1.2: icmp_seq=7 ttl=63 time=8.54 ms
64 bytes from 172.31.1.2: icmp_seq=8 ttl=63 time=9.65 ms
64 bytes from 172.31.1.2: icmp_seq=9 ttl=63 time=9.09 ms
64 bytes from 172.31.1.2: icmp_seq=10 ttl=63 time=8.18 ms
--- 172.31.1.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 25ms
rtt min/avg/max/mdev = 8.082/9.253/11.731/1.032 ms
You can see that the average network latency between the db and app VMs has increased to 9.253 ms, up from 2.907 ms prior to migrating. The path these packets now take, across the tunnel between the NE appliances, through the gateway at the source site, and then back through that same tunnel, incurs a significant cost.
We can also validate that both migrated VMs are still using the same gateway device as was used prior to the migration.
root@cda-db-01 [ ~ ]# arp -a
_gateway (172.31.2.1) at 02:50:56:00:14:00 [ether] on eth0
root@cda-app-01 [ ~ ]# arp -a
_gateway (172.31.1.1) at 02:50:56:00:54:00 [ether] on eth0
This is where MON comes into play. Back in the HCX+ UI, if you click the three vertical dots next to an extended network, you’ll see an option to Activate Mobility Optimized Networking.


Click the Activate button.

You’ll see the status of the network extension as In Progress for a very short time.
Once this is completed, you can see that there is a change in the network extension segment in NSX at the destination site.

Note that the Gateway Connectivity option is now set to On.
You might think that this has done everything you need, but nothing has actually changed yet that affects the flow of traffic to/from the migrated VMs.
root@cda-db-01 [ ~ ]# arp -d _gateway
root@cda-db-01 [ ~ ]# ping 172.31.1.2 -c 10
PING 172.31.1.2 (172.31.1.2) 56(84) bytes of data.
64 bytes from 172.31.1.2: icmp_seq=1 ttl=63 time=9.42 ms
64 bytes from 172.31.1.2: icmp_seq=2 ttl=63 time=8.71 ms
64 bytes from 172.31.1.2: icmp_seq=3 ttl=63 time=9.32 ms
64 bytes from 172.31.1.2: icmp_seq=4 ttl=63 time=10.3 ms
64 bytes from 172.31.1.2: icmp_seq=5 ttl=63 time=9.99 ms
64 bytes from 172.31.1.2: icmp_seq=6 ttl=63 time=9.59 ms
64 bytes from 172.31.1.2: icmp_seq=7 ttl=63 time=9.44 ms
64 bytes from 172.31.1.2: icmp_seq=8 ttl=63 time=11.6 ms
64 bytes from 172.31.1.2: icmp_seq=9 ttl=63 time=10.6 ms
64 bytes from 172.31.1.2: icmp_seq=10 ttl=63 time=9.62 ms
--- 172.31.1.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 25ms
rtt min/avg/max/mdev = 8.712/9.862/11.602/0.771 ms
root@cda-db-01 [ ~ ]# arp -a
_gateway (172.31.2.1) at 02:50:56:00:14:00 [ether] on eth0
We’re still seeing the increased network latency between the cda-db-01 and cda-app-01 VMs, and the gateway device is unchanged (the MAC is the same).
Back on the Network Extensions page in HCX+, click on the Member Virtual Machines tab.

You can see that the VM Location is clittle-Denver and the Router Location is clittle-Palo Alto.

Select the cda-db-01 VM and click the Modify Router Location button.

Note: This warning should be heeded; I saw a second or two of disrupted network traffic to VMs on the segment every time I made this change.
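If you want to quantify that blip, you can leave a ping running against the gateway (or another VM on the segment) while making the change. A minimal sketch, assuming the iputils ping on these Photon OS VMs supports the -O flag, which reports sequence numbers that go unanswered:
# watch for missed replies while the router location is modified (Ctrl+C to stop)
ping -O 172.31.2.1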

Choose the appropriate Target Router Location (clittle-Denver in this example) and click the Modify button.

Note that the Router Location is now clittle-Denver.
Repeat this process for the cda-app-01 VM on its network extension.
Now, we can perform the same tests again on the cda-db-01 and cda-app-01 VMs.
root@cda-db-01 [ ~ ]# arp -d _gateway
root@cda-db-01 [ ~ ]# ping 172.31.1.2 -c 10
PING 172.31.1.2 (172.31.1.2) 56(84) bytes of data.
64 bytes from 172.31.1.2: icmp_seq=1 ttl=63 time=2.39 ms
64 bytes from 172.31.1.2: icmp_seq=2 ttl=63 time=0.864 ms
64 bytes from 172.31.1.2: icmp_seq=3 ttl=63 time=0.567 ms
64 bytes from 172.31.1.2: icmp_seq=4 ttl=63 time=1.00 ms
64 bytes from 172.31.1.2: icmp_seq=5 ttl=63 time=0.604 ms
64 bytes from 172.31.1.2: icmp_seq=6 ttl=63 time=0.931 ms
64 bytes from 172.31.1.2: icmp_seq=7 ttl=63 time=0.419 ms
64 bytes from 172.31.1.2: icmp_seq=8 ttl=63 time=0.792 ms
64 bytes from 172.31.1.2: icmp_seq=9 ttl=63 time=0.525 ms
64 bytes from 172.31.1.2: icmp_seq=10 ttl=63 time=0.812 ms
--- 172.31.1.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 71ms
rtt min/avg/max/mdev = 0.419/0.890/2.388/0.531 ms
root@cda-db-01 [ ~ ]# arp -a
_gateway (172.31.2.1) at 02:50:56:56:44:52 [ether] on eth0
The average network latency between these two VMs is now down to 0.89 ms, even better than what we observed at the source site before the migration.
Also, the gateway device is reporting a different MAC. Previously, it was 02:50:56:00:14:00, an interface on the T1 service router at the source site. Now it is 02:50:56:56:44:52. You can see that the MAC of the gateway for the cda-app-01 VM has also changed.
root@cda-app-01 [ ~ ]# arp -a
_gateway (172.31.1.1) at 02:50:56:56:44:52 [ether] on eth0
Let’s take a look at the NSX configuration at the destination site.
nsx-edge-01b> get logical-router
Thu Oct 19 2023 UTC 12:59:41.343
Logical Router
UUID VRF LR-ID Name Type Ports Neighbors
736a80e3-23f6-5a2d-81d6-bbefb2786666 0 0 TUNNEL 3 1/5000
32a7f5f1-6809-45f3-b622-5528d84633c8 1 1 DR-Tier-0_CDA DISTRIBUTED_ROUTER_TIER0 5 2/50000
17d39607-c2be-4206-8830-97ece6114f4b 2 5 SR-TIER-1_CDA SERVICE_ROUTER_TIER1 9 2/50000
d575e6a2-d144-44cf-8774-894d24d6683f 3 2 SR-Tier-0_CDA SERVICE_ROUTER_TIER0 5 1/50000
1e63f95d-ece9-40da-8e58-c14f9460bc9c 5 4 DR-TIER-1_CDA DISTRIBUTED_ROUTER_TIER1 5 0/50000
Right away, you can see that there is a new Logical Router, DR-TIER-1_CDA.
Switch to this new T1 distributed router.
nsx-edge-01b> vrf 5
Let’s investigate the interfaces to find the gateway addresses.
nsx-edge-01b(vrf[5])> get interfaces
Interface : a8bbd9ec-6b08-427e-929b-f4a5dfa059af
Ifuid : 308
Name : infra-hcx-ne-4aa4d4be-6aad-4550-9766-6ff6c1fd5b0f-dlrp
Fwd-mode : IPV4_ONLY
Mode : lif
Port-type : downlink
IP/Mask : 172.31.1.1/32
MAC : 02:50:56:56:44:52
VNI : 65536
Access-VLAN : untagged
LS port : f9c2ad66-d848-4369-a24c-e1aada6ea1bd
Urpf-mode : STRICT_MODE
DAD-mode : LOOSE
RA-mode : SLAAC_DNS_THROUGH_RA(M=0, O=0)
Admin : up
Op_state : up
Enable-mcast : True
MTU : 7900
arp_proxy :
Interface : 895ffbff-9244-46dd-aa9b-13321bcfba29
Ifuid : 303
Name : infra-hcx-ne-4c8a91ed-7d48-413e-b203-144cd4469c6a-dlrp
Fwd-mode : IPV4_ONLY
Mode : lif
Port-type : downlink
IP/Mask : 172.31.2.1/32
MAC : 02:50:56:56:44:52
VNI : 73728
Access-VLAN : untagged
LS port : bb221969-d421-4e40-bf7d-c7e1ca84787e
Urpf-mode : STRICT_MODE
DAD-mode : LOOSE
RA-mode : SLAAC_DNS_THROUGH_RA(M=0, O=0)
Admin : up
Op_state : up
Enable-mcast : True
MTU : 7900
arp_proxy :
There they are. You can also see this in the NSX UI.
Open up the T1 and expand the Static Routes section.

Click on the 2 in the Static Routes section.
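If you’d rather pull these from the API than click through the UI, static routes live under the Tier-1 in the NSX Policy API. A sketch (the manager FQDN and Tier-1 ID are placeholders; the policy ID of this gateway may differ from the TIER-1_CDA display name):
# list the static routes on the destination T1
curl -sk -u admin https://nsx-mgr-b.example.com/policy/api/v1/infra/tier-1s/TIER-1_CDA/static-routes | jq '.results[] | {network, next_hops}'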

Lastly, you can see that traffic between the migrated VMs and VMs still at the source site is largely unaffected.
root@cda-db-01 [ ~ ]# ping 172.31.0.2 -c 10
PING 172.31.0.2 (172.31.0.2) 56(84) bytes of data.
64 bytes from 172.31.0.2: icmp_seq=1 ttl=63 time=7.75 ms
64 bytes from 172.31.0.2: icmp_seq=2 ttl=63 time=6.14 ms
64 bytes from 172.31.0.2: icmp_seq=3 ttl=63 time=6.17 ms
64 bytes from 172.31.0.2: icmp_seq=4 ttl=63 time=5.36 ms
64 bytes from 172.31.0.2: icmp_seq=5 ttl=63 time=6.17 ms
64 bytes from 172.31.0.2: icmp_seq=6 ttl=63 time=5.93 ms
64 bytes from 172.31.0.2: icmp_seq=7 ttl=63 time=7.90 ms
64 bytes from 172.31.0.2: icmp_seq=8 ttl=63 time=5.97 ms
64 bytes from 172.31.0.2: icmp_seq=9 ttl=63 time=6.55 ms
64 bytes from 172.31.0.2: icmp_seq=10 ttl=63 time=5.64 ms
--- 172.31.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 22ms
rtt min/avg/max/mdev = 5.362/6.358/7.895/0.796 ms