Protocol stack - disadvantages (revision)

Another problem is that the higher-level layers can't see what's in the lower layers, which means an application can't debug where on a connection a problem lies, or what exactly the problem is.

Also, the higher-level layers can't control all aspects of the lower layers, so they can't tune the transfer system when that would be beneficial (windowing, header compression, CRC/parity checking, et cetera), nor specify routing. They must simply rely on the lower protocols working, and can't fall back to alternatives when there are problems.
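To illustrate the point: a typical sockets API exposes only a handful of transport knobs to the application, and everything else (windowing, retransmission, routing) stays hidden below the API boundary. A minimal Python sketch of what *is* reachable (the option names are standard BSD-socket options, not anything specific to this thread):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm -- one of the few per-connection transport
# tweaks an application is allowed to make.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Hint at a receive-buffer size; the kernel may clamp or ignore it.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)

# But there is no portable call to pick a route, inspect checksum
# failures, or watch retransmissions -- those stay in the lower layers.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
sock.close()
```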

Regards,

Reply to
Arthur Hagen

I'm currently revising for my third year exams and came across the following question (from a past paper):

The major advantages of the layered approach to network protocols (e.g. the OSI model) include modularity and manageability. List at least two major disadvantages of such an approach. Explain your answer.

Any suggestions? All I can think of is performance loss... which I don't think is a major issue. I suppose it can also be difficult to decide where to put things. For example IPsec: network layer (therefore transparent) vs. higher layers (better protection*).

Cheers,

Ben

  • Assuming it is used correctly... (End-to-end cryptography in the hands of the user is more secure than network-layer protection [assuming users can manage it...])
Reply to
bensmyth

there are some funny things about the layers ... for instance, the hostname->ip-address mapping is normally a call made by the application ... which then requests a connection based on the ip-address. in the case of multiple A-records (the domain name system maps the same hostname to multiple ip-addresses) ... there is some latitude about which ip-address the application may choose to use and/or retry if it is unable to make a connection.
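That application-layer latitude can be sketched as a resolve-and-retry loop. The sketch below is mine, not from the post; it uses the standard `socket.getaddrinfo` call, and the function name `connect_any` is made up for illustration:

```python
import socket


def connect_any(host, port, timeout=3.0):
    """Try each address the resolver returns until one accepts a connection.

    With multiple A-records, it is the application -- not the network
    layer -- that decides which ip-address to try and in what order.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s            # first address that answers wins
        except OSError as err:
            last_err = err
            s.close()
    raise last_err or OSError("no addresses returned for %s" % host)
```

A multihomed-aware application could go further and sort the `getaddrinfo` results by its own knowledge of which backbone attachment is preferable, which is exactly the kind of decision the lower layers can't make for it.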

in the case of "multihomed" hosts with connections into various strategic backbone locations in the web ... the application layer may have some knowledge about the best choice of which of the ip-addresses to try.

part of this has been that the traditional layered architecture paradigms have followed straight-forward message flows. The things that allow such architectures to operate have tended to be described as some variation on "service" applications that operate outside the straight-forward message flows ... aka some implementations refer to these as out-of-band control functions ... there is an RFC someplace discussing the differences between internet TCP w/o an out-of-band control channel and the (much better) TCP originally implemented on arpanet, which provided support for an out-of-band control channel.

some of the multiple A-record and multihome recovery issues were aggravated when the internet switched from arbitrary/anarchy routing to hierarchical routing in the early 90s (when the internet was much smaller, the infrastructure could look for alternate paths to the same interface ... however that didn't scale well ... requiring the switch-over to hierarchical routing). With the hierarchical routing change-over ... there was much more of a requirement for multihoming into different parts of the internet backbone (for availability).

Reply to
Anne & Lynn Wheeler

another example ... also from the early/mid 90s ... about the same time as the switch-over to hierarchical routing, was ipsec vis-a-vis SSL. ipsec was supposed to handle all the function ... totally encapsulated in the lower-level protocol layers.

SSL came along at the application level and subsumed some amount of the function being projected (at the time) for ipsec. the whole certificate and public key stuff was supposed to be a lower-level function in ipsec (using public key stuff to set up a transport-layer encrypted channel). SSL did all that ... but SSL in the application/browser implementation (w/o requiring anybody to change the machine's protocol stack and/or operating system) also used the same public key certificate to check whether the domain name typed into the browser was the same domain name on the certificate. in the ipsec scenario it would all have been handled at the lower level ... which had no idea what a person had typed in for a URL at the application layer. If the certificate had all been stripped away at the lower level ... the browser application would have had no way of comparing the domain name in the certificate to the domain name typed in as the URL.
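The application-layer check described here is visible in any modern TLS API. A minimal sketch using Python's standard `ssl` module (the function name `open_verified` is made up; the `ssl` calls themselves are real):

```python
import socket
import ssl


def open_verified(hostname, port=443):
    """Open a TLS connection whose certificate is checked against hostname.

    The same certificate that bootstraps the encrypted channel is also
    compared against the hostname the user actually typed -- knowledge a
    lower-layer ipsec implementation would never have.
    """
    ctx = ssl.create_default_context()   # loads trusted CA roots
    ctx.check_hostname = True            # compare cert name vs. URL host
    ctx.verify_mode = ssl.CERT_REQUIRED
    raw = socket.create_connection((hostname, port))
    # server_hostname carries the application-layer name down for matching
    return ctx.wrap_socket(raw, server_hostname=hostname)
```

If the certificate handling were buried in the lower layers, there would be no equivalent of `server_hostname` to match against.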

Reply to
Anne & Lynn Wheeler

I see no reason why an application should try to handle routing (IMHO one should let the routers just do their job) or offer debugging facilities for problems on lower layers. Implementing that would mean much more code and thus a less robust stack (which would no longer be a stack).

Wolfgang

Reply to
Wolfgang Kueter

Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.