ASA5510 and Backup Exec

In our environment we've installed a Cisco ASA5510 firewall (FW). Inside the FW we have a W2003 backup server running Backup Exec 10.0. In the DMZ of the FW we've placed a W2K web server. Currently we're experiencing intermittent problems with the backup.

  1. The backup can run smoothly for several days, and then, without any warning or pattern (none as far as we've figured out), the backup begins to fail. The amount of data and the time before it fails vary.

  2. A simple change in the backup job will correct the problem for another couple of days; for instance, changing the NIC.

  3. We've followed the recommendations from Veritas with regard to which ports need to be open. Netstat verifies that the connections succeed.

  4. Debug logging on the FW is inconclusive. The logs show a reset-o.

Has anyone experienced this kind of problem before? We're seriously contemplating switching to just creating a VMware script and getting the backup out that way, but that would mean slower restore times in case we need to restore single files.

Best regards Petter Fossum

Reply to
petter.fossum

The company I work for had a similar problem: we were seeing consistently slow backups, even with no rules or filtering configured on the ASA. It turned out to be a problem with the ASA's gigabit-ethernet auto-negotiation. I am not sure whether they determined it was negotiation between the ASA and the DMZ switch or between the ASA and the inside switch, but we moved to hardcoding those interfaces and the issue went away.

Although your problem is sporadic, it has been my experience that auto-negotiation problems with Ethernet and gigabit Ethernet can be sporadic in nature. You might want to look into hardcoding those interfaces. Perhaps also check the NICs on the servers and see if they are taking any errors; it might be a problem with the server talking to the network segment.
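If you do want to pin the speed and duplex, the ASA side looks something like the sketch below. The interface name and values here are examples only; match them to your actual hardware and to the switch port on the other end of the link:

```
interface Ethernet0/1
 speed 100
 duplex full
!
! Verify afterwards and keep an eye on the error counters:
show interface Ethernet0/1
```

Remember that the switch port must be hardcoded to the same values; one side hardcoded and the other side left on auto is a classic recipe for a duplex mismatch.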


Reply to
Robert B. Phillips, II

In addition to that, also note that hardcoding doesn't always work. That's right: even if you hardcode the speed and duplex on a PIX, that doesn't mean it will honor those settings at all. I ran into this very problem on a 525 when I upgraded to 7.0.4. The inside interface connected to a 48-port 10/100 blade in a 6509. Both sides were hardcoded to 100/full. However, the PIX flat out ignored this setting, continued with auto-negotiation, and somehow chose 100/half. The network backups, which had been pegging the inside interface at ~100Mbps all night long, dwindled down to ~4Mbps. The inside interface showed massive amounts of late collisions; the 6509 port status showed nothing out of the ordinary. The other PIX interfaces (a 4-port 10/100 expansion card) were connected to DMZ switches (2950s) and experienced no problems.

The short-term fix was to move the "inside" interface (by name) to one of the ports on the 4-port card, which was no simple matter. You can't actually "move" the nameif to another interface. The PIX thinks it's smart and changes every command in which you reference "inside" to whatever you rename that physical interface to, so you first have to rename the "inside" interface because you can't have two interfaces with the same name. The whole notion of giving interfaces vanity names is silly IMHO, but I digress.

Basically don't trust the hardcoded settings. Always check both sides of the interface stats.
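Checking both sides means comparing the counters on the firewall and on the switch for the same physical link. A sketch of the commands (interface names here are examples, not from the original setup):

```
! PIX/ASA side: look for late collisions, runts, CRC errors
show interface ethernet0/1

! Catalyst side: look at the same link from the switch's point of view
show interfaces FastEthernet3/24 counters errors
```

If one side reports late collisions or CRC errors while the other looks clean, a speed/duplex mismatch on that link is the usual suspect.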

Also, I highly recommend setting up MRTG, Cacti or one of 100 other tools for graphing interface I/O. You need solid numbers to really get a feel for how things are working.
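The arithmetic those graphing tools perform is simple enough to sketch: sample a 32-bit SNMP octet counter (e.g. ifInOctets) twice and turn the delta into a rate. A minimal Python illustration; the counter values and the 300-second poll interval are made-up examples:

```python
# Sketch of the counter-delta math MRTG/Cacti do for interface graphs.
COUNTER32_MAX = 2**32  # ifInOctets is a 32-bit counter that wraps around

def throughput_mbps(old_octets, new_octets, interval_s):
    """Average throughput in Mbit/s between two counter samples."""
    # Modulo arithmetic absorbs a single counter wrap between samples.
    delta = (new_octets - old_octets) % COUNTER32_MAX
    return delta * 8 / interval_s / 1_000_000

# Example: counter advanced by 150,000,000 octets over a 300 s poll
print(round(throughput_mbps(1_000_000, 151_000_000, 300), 1))  # prints 4.0
```

Note that on a busy gigabit link a 32-bit counter can wrap more than once inside a 300-second poll, which is why the high-capacity 64-bit counters (ifHCInOctets) are preferred where available.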

J


Reply to
J

