A couple of fellow computer geeks and I were discussing some proposed changes to how people and processes access servers within the DMZ. The proposed solution routes all SSH access through a set of jump box servers; from there you can ssh wherever you need to go. These servers also let you tunnel your traffic through to a server on the inside, and you can set up ssh key pairs so that you don't have to enter a username/password at each hop. My initial concern is that this new policy is going to break many of the existing processes that work today with direct ssh access to all the target hosts. They assured me that any commands I run today will still work when going through the new jump boxes.
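For what it's worth, OpenSSH can make the jump-box hop mostly transparent via `ProxyJump` in `ssh_config`. The hostnames below are hypothetical stand-ins, not the actual servers being discussed:

```
# Hypothetical ssh_config entry: connections to the internal host are
# automatically routed through the jump box, so plain "ssh internal-host"
# keeps working for existing scripts.
Host internal-host
    ProxyJump jumpbox.example.com

# One-off command-line equivalents:
#   ssh -J jumpbox.example.com internal-host
#   ssh -L 8443:internal-host:443 jumpbox.example.com   # local port forward
```

With key pairs loaded at each hop (or agent forwarding), this is what would let existing commands run unchanged, at least in theory.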
My overall response to this change wasn't very positive. Are there flaws in how the implementation is being proposed? Essentially they left it up to each user to work out for themselves how to manage setting up the ssh tunnels. From what I have seen so far, most people are hard-coding these tunnels to specific ports. For a small set of tests/users this probably works well. But what happens when different groups of users clobber each other's attempts to set up the ssh tunnels, or a set of scripts run by the same user step on each other's ports because run times overlap? Granted, you could solve this problem with code, but it seems like a hack to me...
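The "solve it with code" hack I have in mind would be something like asking the OS for a free ephemeral port instead of hard-coding one, then handing that port to the tunnel command. A minimal sketch (the `ssh -L` line in the comment uses hypothetical host names):

```python
import socket

def free_local_port():
    """Ask the OS for an unused ephemeral port instead of hard-coding one.

    Binding to port 0 makes the kernel pick a free port; we read it back
    and release the socket. Note the small race: another process could
    grab the port between this call and starting the tunnel.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = free_local_port()
# The script would then launch its own tunnel, e.g.:
#   ssh -L {port}:internal-host:22 jumpbox.example.com
# so two scripts run by the same service account never collide.
```

This avoids the clobbering, but it confirms my point: every script now has to carry tunnel-management logic that direct ssh access never needed.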
Back to the original point of this post: what security does this approach actually add? Now you have one box (or a set of them) to go through... what did that buy you? If I can do all the same actions I once could, what added security is being employed? Since most of the processes we are talking about here operate under service accounts, none of them are tied to an individual. I agree with the approach for individual users, but for automated processes it doesn't make sense. Suggestions?