Route table not persisting with openvpn client

  • #28214
    some_dev
    Participant

    I am working with a Conduit AEP, firmware 1.7.4.
    I’m not sure this is the right place to post the question, since in this particular case I am configuring via the command line and avoiding the GUI altogether.

    I have an OpenVPN client on the Conduit connected to a cloud server.
    I start the client with /etc/init.d/openvpn start. Everything goes well until, a few hours later, messages stop getting through the Conduit.

    On investigation, it appears the client is still connected to the server, but the routing table has forgotten the VPN routes.
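
    A few quick checks confirm that state when it happens (a minimal sketch for the BusyBox shell on the Conduit; openvpn normally logs via syslog when daemonized, so the log location is an assumption — check your config for a log/log-append directive):

    # is the client process still alive?
    ps | grep '[o]penvpn'
    # is the tunnel interface still up, even though its routes are gone?
    ifconfig tun0
    # are any tun0 routes left in the kernel table?
    route -n | grep tun0
    # assumes openvpn logs via syslog
    grep -i openvpn /var/log/messages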

    admin@mtcdt:~# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.88.1    0.0.0.0         UG    0      0        0 eth0
    192.168.88.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
    admin@mtcdt:~# ^C
    admin@mtcdt:~# /etc/init.d/openvpn stop
    Stopping openvpn: multitech_55-cloud.
    admin@mtcdt:~# /etc/init.d/openvpn start
    Starting openvpn: multitech_55-cloud.
    admin@mtcdt:~# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         10.8.0.29       128.0.0.0       UG    0      0        0 tun0
    0.0.0.0         192.168.88.1    0.0.0.0         UG    0      0        0 eth0
    10.8.0.1        10.8.0.29       255.255.255.255 UGH   0      0        0 tun0
    10.8.0.29       0.0.0.0         255.255.255.255 UH    0      0        0 tun0
    10.10.0.0       10.8.0.29       255.255.240.0   UG    0      0        0 tun0
    xx.xx.xx.xx     192.168.88.1    255.255.255.255 UGH   0      0        0 eth0
    128.0.0.0       10.8.0.29       128.0.0.0       UG    0      0        0 tun0
    192.168.88.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
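
    What this restart shows is that openvpn itself re-installs the pushed routes when it reconnects, so it may be worth making that happen automatically. A sketch of client-config directives to experiment with, assuming the init script reads /etc/openvpn/multitech_55-cloud.conf and that the routes are pushed by the server:

    # keepalive = ping 10 + ping-restart 60: if the server goes quiet
    # for 60 seconds, soft-restart the tunnel
    keepalive 10 60
    # keep key material across soft restarts so reconnects are fast
    persist-key
    # if routes keep vanishing, try removing persist-tun: with it set, a
    # soft restart keeps tun0 open and does not re-run the route commands
    ;persist-tun
    # verbose logging so route adds/deletes and restarts are visible
    verb 4
    log-append /var/log/openvpn-cloud.log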

    ifconfig looks totally fine even when the connection is “down”:
    admin@mtcdt:~# ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:08:00:4A:06:1E
              inet addr:192.168.88.55  Bcast:192.168.88.255  Mask:255.255.255.0
              inet6 addr: fe80::208:ff:fe4a:61e%3069672144/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:550991 errors:4 dropped:7534 overruns:2 frame:4
              TX packets:479521 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:41884311 (39.9 MiB)  TX bytes:45700820 (43.5 MiB)
              Interrupt:23 Base address:0xc000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1%3069672144/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:685232 errors:0 dropped:0 overruns:0 frame:0
              TX packets:685232 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:86443520 (82.4 MiB)  TX bytes:86443520 (82.4 MiB)

    tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:10.8.0.30  P-t-P:10.8.0.29  Mask:255.255.255.255
              UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
              RX packets:3629 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4926 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:140893 (137.5 KiB)  TX bytes:366936 (358.3 KiB)
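
    As a stopgap, the missing routes can also be re-added by hand instead of bouncing the daemon. A sketch using the addresses from the transcripts above (your peer gateway and pushed subnets will differ):

    # VPN-internal routes via the tun0 peer 10.8.0.29
    route add -net 10.10.0.0 netmask 255.255.240.0 gw 10.8.0.29 dev tun0
    route add -host 10.8.0.1 gw 10.8.0.29 dev tun0
    # the two half-default (/1) routes are what redirect-gateway def1 installs
    route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.8.0.29 dev tun0
    route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.8.0.29 dev tun0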

    P.S. A bit of background: I’m configuring this over the command line because the GUI config kept freezing the Conduit.

    #28226
    some_dev
    Participant

    UPDATE:

    I have switched to using the GUI with a ‘CUSTOM’ OpenVPN tunnel.
    Running over several hours shows that the problem is still present with this setup; the routes are simply not persisting.
    I suspect this could be a bug, as the same problem has occurred on another MultiTech unit as well.
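
    Until the root cause is found, a cron-driven watchdog is one possible stopgap: in the broken state above the kernel table had no tun0 entries at all, which is an easy condition to test for. A minimal sketch (script path and cron interval are arbitrary):

    #!/bin/sh
    # /usr/local/bin/vpn-watchdog.sh (hypothetical path)
    # If no tun0 routes are left, bounce the client so the pushed
    # routes get re-installed on reconnect.
    if ! route -n | grep -q tun0; then
        logger "vpn-watchdog: tun0 routes missing, restarting openvpn"
        /etc/init.d/openvpn restart
    fi

    Run it from root’s crontab every few minutes, e.g.:
    */5 * * * * /usr/local/bin/vpn-watchdog.sh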
