TCP/IP MTU values + DSL / PPPoE

Looking for a little education or links concerning MTU values with respect to DSL connections and PPPoE.

It seems we can't connect from our home, via our DSL line, to our school's web application called Blackboard. There are lots of postings on the web about this DSL problem.

The normal MTU setting for a DSL line using PPPoE is 1492. With that setting, we can't use some of the Blackboard features - they hang. HOWEVER, if we modify our laptop MTU setting to 1362 - IT WORKS

Also - we have not been able to "send" or "forward" email using our school's web based Exchange OWA from home.

I found a TCP utility called TCP Optimizer that makes it easy to change and test the MTU value, and therefore easy to replicate the web application problems.

formatting link
It can basically test for the largest packet that passes without fragmentation... which seems to be the cause of the problem.

I just tried the MTU modification to 1362 as a test, and now the Exchange OWA "sending" works. If I change the MTU back to 1492 - the OWA "sending" hangs... So - there's another web application mystery related to the MTU values.

At this point I have no idea how MTU works or what's actually involved. I'm technically curious about it, and will pursue an explanation.

The bottom line is that it appears when the local computer or router is working with a DSL + PPPoE imposed MTU setting of 1492 - things fail. When the local MTU is modified to 1362 - things work.

Any education on why a "smaller" MTU makes things work, and yet a "larger" MTU makes them fail or hang ?????

Reply to
P.Schuman

Instead of me reinventing the wheel... Google "MTU black hole". You will find many descriptions of the problem. The easiest way to solve this problem is to set the MTU on your school's web servers to 1362. You can enable "MTU black hole detection" on your web servers, but the application will run very slowly because black hole detection is slow and is done on every connection. Setting the MTU to 1362 will have virtually no impact on the performance of the servers and will solve the problem.

Scott

Reply to
Thrill5

Go to:

formatting link
(with JavaScript enabled). This will redirect to a page that will show you the real connection settings from that web site's point of view.

You can then adjust your own settings locally to match those. If you have a greater MTU, it means that packets get fragmented along the line, and some software at the other end may not process them properly.

For instance, the Microsoft POP server requires that any command be sent in a single packet. It doesn't have the smarts to keep reading and reassembling packets until it finds the CR-LF mark, as one would expect the software to work.

Reply to
JF Mezei

tnx for the info & links...

BTW - I guess, at a very basic level, going back to my OSI protocol stack model, I don't see how the Application layer gets dramatically impacted by what appears to be a TCP setting at the Transport layer? That is what is confusing to me....

Application - Blackboard, Exchange OWA
Presentation
Session
Transport - TCP
Network
Datalink
Physical

Reply to
P.Schuman

here's the results from that page - but interestingly... the reported MTU shows 1492 - which is probably what the NAT router is sending out, but the laptop is actually running with 1362 - and the problem web application works!

-- « SpeedGuide.net TCP Analyzer Results »
Tested on: 01.28.2007 00:44
IP address: 70.131.xxx.xx

TCP options string: 020405ac0103030201010402
MSS: 1452
MTU: 1492
TCP Window: 259112 (NOT multiple of MSS)
RWIN Scaling: 2
Unscaled RWIN: 64778
Recommended RWINs: 63888, 127776, 255552, 511104
BDP limit (200ms): 10364kbps (1296KBytes/s)
BDP limit (500ms): 4146kbps (518KBytes/s)
MTU Discovery: OFF
TTL: 118
Timestamps: OFF
SACKs: ON
IP ToS: 00000000 (0)
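The options string in those results can be decoded by hand. A minimal sketch (Python; option kinds per RFCs 793, 1323 and 2018 - `parse_tcp_options` is just an illustrative name, not part of any tool mentioned here):

```python
def parse_tcp_options(hex_string):
    """Decode a TCP options string such as the one the analyzer reports."""
    data = bytes.fromhex(hex_string)
    opts = {}
    i = 0
    while i < len(data):
        kind = data[i]
        if kind == 0:               # End of option list
            break
        if kind == 1:               # NOP (padding)
            i += 1
            continue
        length = data[i + 1]
        value = data[i + 2:i + length]
        if kind == 2:               # Maximum Segment Size
            opts["mss"] = int.from_bytes(value, "big")
        elif kind == 3:             # Window scale (shift count)
            opts["wscale"] = value[0]
        elif kind == 4:             # SACK permitted
            opts["sack"] = True
        i += length
    return opts

opts = parse_tcp_options("020405ac0103030201010402")
print(opts)              # {'mss': 1452, 'wscale': 2, 'sack': True}
print(opts["mss"] + 40)  # 1492 - add 40 bytes of IP+TCP header to get the MTU
```

The decoded MSS of 1452 plus 40 bytes of headers gives exactly the 1492 MTU the analyzer shows.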

Reply to
P.Schuman

If you have a cisco router

int your.inside.one
 ip tcp adjust-mss 1452
! 1452 = the 1492-byte PPPoE MTU less 40 bytes of IP/TCP headers

is your friend.

As to the layered model: the layering is not pure. Applications can, for example, request that TCP set the DF flag. Applications that do this might be regarded as badly behaved; however, it is allowed.

Do what you can to ensure that Path MTU discovery is working. i.e. enable ICMP undeliverable (too big) - I forget the exact term on all of your interfaces.

Reply to
Bod43

tnx for the info - I'll pass that along to the network folks... I know a lot of folks have been disabling ICMP, Pings, etc.

Reply to
P.Schuman

TCP/IP transfers data in packets, not "records". On most operating systems, doing a "READ" operation gives you a chunk of data that may or may not be complete.

So while the client may be sending an HTTP request in one I/O operation, it may actually get sent in multiple TCP/IP packets, so the server may need to do multiple READs to obtain the whole HTTP request. If the app is poorly written, it will wrongly expect the full HTTP header to be readable in one I/O operation, so when the header arrives in multiple packets, the application reads the first packet and then declares an error because of missing header information.

(just a theoretical example).

I know this to be the case with a Microsoft POP server. It expects all POP commands to be readable in a single I/O operation (as opposed to assembling a command from multiple reads until a CR/LF is found).
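The robust pattern - keep reading until the CR/LF terminator arrives, however many packets that takes - can be sketched like this (a hypothetical Python illustration using a local socket pair, not the actual server code):

```python
import socket

def read_command(sock):
    """Read from sock until a CR/LF terminator, however many recv()
    calls that takes -- the robust way to read a line-oriented command."""
    buf = b""
    while not buf.endswith(b"\r\n"):
        chunk = sock.recv(4096)
        if not chunk:               # peer closed before the terminator
            raise ConnectionError("connection closed mid-command")
        buf += chunk
    return buf

# Demonstrate with a local socket pair: the command arrives in two pieces,
# as it might when a small MSS splits it across TCP segments.
a, b = socket.socketpair()
a.sendall(b"RETR ")
a.sendall(b"1\r\n")
print(read_command(b))   # b'RETR 1\r\n'
```

Whether the two pieces arrive in one recv() or two, the loop assembles the same complete command.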

Reply to
JF Mezei

If the transport layer isn't "transporting" data, it will certainly affect the layers above. When a TCP connection is made, the segment size is based on the MTUs at the END points of the connection, not the MTUs of the routers in the path. So if ANY of the routers between the end points has an MTU smaller than the endpoints', you will see this problem. The router with the smaller MTU will drop the packet and send an "ICMP MTU exceeded" message back to the sender. In a perfect world this just makes the application slower, because MTU discovery is not instantaneous. In the real world, the "ICMP MTU exceeded" message does not get back to the source because of NAT or firewall settings, or is ignored by the sender completely because an ICMP message does not give any information about which TCP connection/application was affected. Since the sender sends the data and doesn't know why the data isn't getting to the client, your application breaks.

Scott


Reply to
Thrill5

Mr P.Schuman

However, the "MTU Black hole" victim is not guilt-free. You have to be trying Path MTU Discovery in order to be setting the "don't fragment" bit - or perhaps there is some other out of the ordinary reason for setting the "don't fragment" bit. Perhaps an end user sending IP packets should check this first.

Otherwise, the only way is to retry, ideally using binary search logic, with MTU sizes lower than the one you know applies to the interface you are using to send the packets. I guess this is how the 1362 value was discovered. In effect, you are achieving the same result as Path MTU Discovery, but manually rather than automatically. Rather than use a binary search, you could even try the values recommended in RFC 1191 Table 7-1, "Common MTUs in the Internet", just as is recommended for Path MTU Discovery when the router which rejects a packet with "don't fragment" set hasn't heard that it's supposed to be RFC-1191-friendly and provide the MTU value of the interface over which it would have liked to forward the packet.
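The binary-search idea can be sketched as follows (Python; `probe` stands in for sending a DF-flagged packet of a given size and seeing whether it arrives - a real probe would use ping with the don't-fragment bit set, so this is only a model of the search logic):

```python
def largest_unfragmented(probe, lo=68, hi=1500):
    """Binary-search the largest packet size that probe() accepts.
    lo starts at 68, the minimum MTU every IPv4 link must support."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid            # mid got through; try larger
        else:
            hi = mid - 1        # mid was dropped; try smaller
    return lo

# Simulate a path whose true MTU is 1362:
path_mtu = 1362
print(largest_unfragmented(lambda size: size <= path_mtu))   # 1362
```

Eleven or so probes pin down the exact path MTU, instead of guessing values by hand.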


Reply to
Chris Mason

Mr P.Schuman

If you SNMP GET the sysServices object value from any IP node, you will see that only a limited number of OSI "services" are actually supported by the IP instance. If the IP node ever claims to be supporting the "Session" and "Presentation" layers, it is not being truthful.

Actually, you should reflect on the fact that you *are* being supported according to the OSI model. You are using TCP, which is a *reliable* transport, and it is telling you - by ending the TCP connection - that something went wrong at a lower layer. Because, as it appears, you have in effect misconfigured your Data Link layer, you are experiencing failure at your Network layer, and your Transport layer is telling you about it.


Reply to
Chris Mason

Mr P.Schuman

Ensuring that "Path MTU Discovery" is working means checking all of the IP nodes between the client and the server on all possible paths. Do you control them all?

If you do, you should ensure that "ICMP Destination Unreachable - Fragmentation Not Allowed" - not "too big" or "MTU exceeded" - will be returned as appropriate by each of them, and preferably they will have support for specifying the value of the MTU on the interface over which they would have liked to have sent the offending packet.

Incidentally, just in case the point needs to be made, the usual way to send data over the internet is to use IP headers which do *not* have the "don't fragment" (DF) bit set. In other words, since we have a double negative and it can be confusing, if any router on the path that happens to have been chosen to get from the source IP node to the destination IP node for any one packet[1] discovers that the size of the packet exceeds the maximum allowed on the outbound interface selected by the routing tables, it is allowed to fragment the packet.

[1] This is a reason why some sort of exploratory protocol used when the TCP connection is established in order to discover all the outbound interface MTU values with the objective of selecting a minimum would not work.

Chris Mason

Reply to
Chris Mason

Mr P.Schuman

My favourite authors regarding IT matters, particularly with respect to networking, are Sir Walter Scott and Alexander Pope. The former is the author of "O, what a tangled web we weave when first we practise to deceive!" and the latter of "A little learning is a dang'rous thing"[1]. Your last sentence would appear to be an example of the latter. In other words, some ICMP, such as "Destination Unreachable - Fragmentation Not Allowed", is golden, even if you imagine you can do without ICMP Echo and Echo Reply, aka Packet InterNet Groper.

[1] "; drink deep, or taste not the Pierian spring: there shallow draughts intoxicate the brain, and drinking largely sobers us again."


Reply to
Chris Mason

thanks for all the info - and by manually using TCP Optimizer to set the MTU smaller on the laptop, we must then be forcing a fragmenting - which is NOT being performed by the DSL routers, which are showing a slightly smaller MTU compared to other 1500-byte interfaces without the PPPoE overhead.

Reply to
P.Schuman

Mr P.Schuman

According to the strict IP definition of the term, "fragmentation" occurs only when an intermediate routing node receives a packet, works out over which interface to send the packet based on its routing table, discovers that the packet is too large to send over that interface and doesn't find the "don't fragment" bit set in the packet header. The router then creates as many packets as are needed in order to forward the packet over the interface by "fragmenting" the original packet. This involves creating an IP header, essentially a copy of the IP header of the original packet, for each "fragment" and copying as much of the data as will fit for sending over the interface.

It is a matter for the receiving node's IP layer, the "network" layer, to reconstruct the original packet before passing it to the appropriate "transport" layer instance.
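That fragment-and-reassemble bookkeeping can be sketched as follows (Python; a simplified model of the router side only - fragment offsets are carried in 8-byte units, which is why the data in each non-final fragment is rounded to a multiple of 8):

```python
def fragment(payload_len, header=20, link_mtu=1362):
    """Sketch of router-side fragmentation: split an IP payload so each
    fragment (header + data) fits the outbound link MTU.  Returns a list
    of (offset_in_8_byte_units, data_length, more_fragments_flag)."""
    per_frag = (link_mtu - header) // 8 * 8   # data per non-final fragment
    frags = []
    offset = 0
    while offset < payload_len:
        size = min(per_frag, payload_len - offset)
        more = offset + size < payload_len    # "more fragments" bit
        frags.append((offset // 8, size, more))
        offset += size
    return frags

# A 1480-byte payload (a full 1500-byte packet minus its 20-byte header)
# crossing a 1362-byte link becomes two fragments:
print(fragment(1480))   # [(0, 1336, True), (167, 144, False)]
```

The receiving node's IP layer uses the offsets and the "more fragments" flag to rebuild the original packet before handing it up.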

When you set up a TCP connection a maximum segment size (MSS) value is established for sending data. In order to allow for 20 bytes of IP header and 20 bytes of TCP header, this is 40 bytes less than the MTU value of the local interface used to send the IP packets. The TCP logic will never send any segments containing more data than can be fitted into this size of segment. It was pointed out earlier in the thread that TCP turns a blind eye to what the TCP user may happen to pass over the API each time the API is used. TCP uses its own logic to decide what to send when based on its flow control mechanism. Data are simply a stream of bytes from the time the TCP connection is established until it is terminated, strictly in the send direction in this case.
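That MSS arithmetic is easy to check directly (Python; the PPPoE note assumes its usual 8 bytes of per-packet overhead):

```python
IP_HEADER = 20   # bytes
TCP_HEADER = 20  # bytes

def mss_for_mtu(mtu):
    """MSS implied by a given interface MTU: MTU minus IP and TCP headers."""
    return mtu - IP_HEADER - TCP_HEADER

for mtu in (1500, 1492, 1362):
    print(mtu, "->", mss_for_mtu(mtu))
# 1500 -> 1460  (plain Ethernet)
# 1492 -> 1452  (Ethernet less 8 bytes of PPPoE overhead)
# 1362 -> 1322  (the laptop setting that made the applications work)
```

Note that 1452 is exactly the MSS the SpeedGuide analyzer reported earlier in the thread.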

The point of the above is that by setting a smaller MTU and hence a smaller MSS you are *not* creating fragments. If as a result of setting a smaller MSS no fragments are generated on the path taken by any one IP packet, the IP layer of the receiving node is going to have a very easy time of it. The TCP layer of the receiving node will pass the data received in individual packets, strictly to be called segments, to the receiving API as it sees fit. Probably TCP will pass all the data received since it last passed any data for each call over the API without any reference to what happened to be received in arriving segments which provided the data.


Reply to
Chris Mason

Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.