In article , Brian Desmond wrote:
:We have Exchange working just fine here through Checkpoint firewalls.
:There are numerous (perhaps half a dozen) sets of ports you need to
:manually lock down on the Exchange servers and GCs. Once that's done,
:things work like a charm assuming all those ports are open.
:With OLK2003 and Exchange 2003, it's much easier though for some
:situations. The new RPC/HTTP requires a bunch of configuration on your
:Exchange environment, but then you can open 443 to the Internet and
:people can point the Outlook full client at the rpc/http frontends (can
:just be your OWA cluster) and it works as well as being in the office
:(perhaps a bit more latent, of course).
In PIX 6.x, according to Cisco, RPC fixup happens only in one direction. If A, behind PIX-A, contacts B, behind PIX-B, for RPC, and B sends back a port number, then one of PIX-A or PIX-B (I don't recall which) will open the port and the other won't -- from the perspective of the PIXes, one of the transactions is incoming and the other is outgoing, and the PIX only does RPC fixup in one direction.
But that's a Cisco problem, not a general Exchange problem. PIX 7.0 is supposed to have better RPC handling.
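The one-directional fixup problem can be sketched in a few lines. This is an illustrative model, not Cisco code; the class, port number, and pinhole semantics are all invented for the example:

```python
# Hypothetical sketch: why one-directional RPC fixup breaks a
# PIX-to-PIX RPC call. Each firewall only inspects (and so only opens
# pinholes for) transactions it sees as outgoing from its inside.

class Pix:
    def __init__(self, name):
        self.name = name
        self.pinholes = set()  # (peer, port) openings created by fixup

    def fixup_outgoing_rpc(self, peer, negotiated_port):
        # Fixup watched the outgoing RPC exchange and learned the
        # dynamically negotiated port, so it opens a pinhole for it.
        self.pinholes.add((peer, negotiated_port))

    def allows(self, peer, port):
        return (peer, port) in self.pinholes

pix_a, pix_b = Pix("PIX-A"), Pix("PIX-B")

# A (behind PIX-A) calls B (behind PIX-B); B replies with port 3456.
# PIX-A saw the transaction as outgoing, so its fixup runs...
pix_a.fixup_outgoing_rpc("B", 3456)
# ...but PIX-B saw the same transaction as incoming, so its fixup
# never runs and no pinhole is created on B's side.

print(pix_a.allows("B", 3456))   # True
print(pix_b.allows("A", 3456))   # False: follow-up traffic is dropped
```

The follow-up connection to the negotiated port then succeeds through one firewall and is silently dropped by the other.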
What -is- a general Exchange problem is that once you have talked to an Exchange server using a particular IP+port, Exchange will continue to assume that you are available on that IP+port, and might, anywhere from 5 minutes to 3 weeks later, attempt to contact you on that IP+port. If the IP+port that the Exchange server sees is a PAT translation, then Exchange will, for the next few weeks, regularly attempt to contact a port translation that long since expired.
I have traced this through, and it isn't just UDP: it happens for TCP as well. Your Windows workstation contacts the Exchange server via TCP, a normal conversation complete with FIN ACK ensues (probably lasting less than 2 minutes) -- and Exchange will continue to assume that the port it saw is still valid, even though the connection was completely closed :(
I don't recall at the moment whether I've seen the following for Exchange 2003, but with Exchange 2000 for sure, we saw numerous instances in which the Exchange server attempted to contact the client using the private internal IP of the client, not the public IP. The clients talk to a local WINS server in order to resolve the server identities, and either directly that way or indirectly through PDCs, the WINS server learns the internal IPs. The local WINS server then spills those local IPs over to HQ's WINS server... The PIX, for one, does not translate the IPs that are deep in the data fields of the NETBIOS transactions, so it's the private IPs that HQ's WINS server learns and wants to use. :(
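The root of that one is that NAT rewrites the IP header but not addresses carried inside the application payload. A toy illustration (the addresses and payload format are invented; real NetBIOS registrations are binary, not text):

```python
# Sketch of the embedded-address problem: header-only NAT leaves a
# private IP inside the application payload untouched, so a WINS-like
# server learns the private address. Addresses/format are made up.

PRIVATE_IP = "10.1.2.3"
PUBLIC_IP = "198.51.100.7"

def nat_translate(packet):
    # Header-only translation, as a PIX does for plain IP/TCP/UDP
    # when no protocol-aware fixup rewrites the payload.
    translated = dict(packet)
    if translated["src"] == PRIVATE_IP:
        translated["src"] = PUBLIC_IP
    return translated

# A name registration carrying the client's own IP in the data field:
pkt = {"src": PRIVATE_IP, "payload": f"REGISTER client1 {PRIVATE_IP}"}
out = nat_translate(pkt)

print(out["src"])      # header was fixed up to the public address
print(out["payload"])  # payload still carries 10.1.2.3 -- this is
                       # what the upstream WINS server records
```

Anything that later trusts the WINS record (like the Exchange server above) then tries to reach the unroutable private address.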
There's another issue that comes up on the PIX, especially with Exchange 2003. There's an RPC transaction, the local PCs learn a destination port, and the PIX creates a temporary ACL exemption in order to allow outgoing access to the destination. That works fine as long as the subsequent flow stays active, but eventually the TCP connection closes or the UDP flow goes idle and the PIX times out the flow. Later, the local PC wants to send more -- and instead of assuming that the destination port might have changed and doing another RPC, the local PC just goes ahead and tries to use the cached destination port. As the temporary ACL exemption is gone by then, the local PC can't get through our outgoing filters. The local PC will try for *days* on the same port, waiting for the remote system to answer, when if it simply did another RPC query it would get through in seconds... The only work-around for this is to open up our outgoing filters to allow outgoing access to a hundred thousand or so ports on every component of every Exchange server around our (continent-wide) organization. :(
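The two client behaviours can be contrasted in a short sketch. Everything here (service name, port, the shape of the endpoint mapper) is invented for illustration; the point is only the ordering of operations:

```python
# Sketch: reusing a cached RPC port after the firewall's temporary
# opening has expired, versus re-querying the endpoint mapper first.
# Names and ports are hypothetical.

endpoint_mapper = {"exchange-svc": 4321}   # current dynamic port
firewall_openings = set()                  # temporary ACL exemptions

def rpc_query(service):
    # The firewall watches this exchange and re-opens the learned port.
    port = endpoint_mapper[service]
    firewall_openings.add(port)
    return port

def send(port):
    # Outgoing traffic passes only while a temporary opening exists.
    return port in firewall_openings

# Initial conversation: query the mapper, then send.
port = rpc_query("exchange-svc")
assert send(port)

# The flow goes idle; the PIX times out the temporary opening.
firewall_openings.clear()

print(send(port))                        # False: the client retries
                                         # this stale port for days
print(send(rpc_query("exchange-svc")))   # True: a fresh RPC query
                                         # re-opens the port in seconds
```

If the client simply re-queried on failure, the fixup would recreate the opening and no standing hole in the outgoing filters would be needed.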
Then there's another Exchange-related problem we see. This one might technically be at the NT Domain Authentication level... it's pretty hard to tell. Anyhow, from time to time, one of our Windows PCs takes a fancy and starts trying to make NETBIOS contacts with 10 to 50 different Windows PCs in other branches of our organization. Each such incident extends for numerous hours (3 - 14 hours, maybe longer); each destination is tried a number of times, but with the exact destinations tending to shift a bit over time. These destinations are sometimes PDCs or BDCs, but it is quite common for them to be just plain Windows PCs with no special attributes. I figure the remote locations must be learned via WINS. The source systems for this are, as best I recall, always Windows XP. When I first started investigating this problem a few years ago, when XP was still relatively new, on one particular run the destinations were also exclusively XP -- but I don't know if that still holds. On some spot-checks, the systems initiating the connections never had a detected virus or spyware.
Now, one important factor in the behaviours we see is that we control *outgoing* connections as well as incoming. If you do not control outgoing connections, you would never notice half of these problems.