Hi,
I have a Red Hat 9 machine (kernel 2.4.20-8smp) with Cisco VPN client
4.6.00 (upgraded from 4.0.1, with which I had the same problem). The machine is behind our PIX 515E. I am connecting to a 3002 concentrator at the other side, but I have no access to it. The machine has two local interfaces, 10.0.0.22/24 and 192.168.254.22/24, and I reach it via SSH through port forwarding on the PIX, so in that sense it has a unique external address.
The VPN comes up fine. The big problem is that as soon as I start the VPN client I lose external access to the machine. The client reports "Local LAN Access is disabled" when connecting, even though I have EnableLocalLAN=1 in vpnclient.ini. I can still reach the machine from another host on the same local LAN, but *only* via the 10.0.0.22 address, and it becomes very slow. Previously I was losing all access completely until the VPN connection presumably exited due to inactivity and I could get back on. To return everything to normal I have to restart the network service and re-add the default gateway to the routing table.
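For reference, the recovery steps I run look roughly like this (10.0.0.1 is just a placeholder here for our actual default gateway, and the exact route the client removes may differ):

```shell
# Restart the Red Hat network service to reset the interfaces
service network restart

# The VPN client appears to remove the default route; re-add it by hand.
# 10.0.0.1 stands in for our real gateway address.
route add default gw 10.0.0.1

# Inspect the routing table afterwards to confirm the default route is back
route -n
```

Only after both steps does normal access to the machine come back.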
Why is the VPN client disrupting IP access to the machine to such a degree? Or is it partly or wholly down to my network setup? This is the issue I really need to address, as I can't use this VPN client in production as it stands.
Not as serious, but another thing I don't quite understand: the concentrator admin says the initial problems were caused by a clash between my internal network and the virtual client IP and its netmask, since my LAN uses a 10. address and so does the VPN network. The address I need to reach on their side is 10.129.128.100/24. He says he has therefore forced the virtual interface IP to be 192.168.255.x and used NAT on his side so it appears as 10.128.255.x, as it should. I don't understand why he's done this, since the VPN connection statistics show the correct route for the remote side and I can connect to 10.129.128.100 as I need to.
Cheers, Ian