I worked for the U.S. Federal Government. I worked on a number of networks worldwide – sort of. Here are some must-know things about working as a network engineer in the U.S. Federal Government.
1. You may not be hired for what you think you’ll be hired for.
- The Government’s hiring practices are sordid at best. There are so many ridiculous checks and balances that hiring for a typical position can take up to six months. The problem is that projects will not just stop and wait for a government employee to be hired. A contractor, however, can be brought on immediately. By the time a government employee is brought on, the project may be happy with the contractor, and then there is no funding for you. Turnover among government employees on projects is so high that no one may even know who you are or what you’re doing there when you show up.
2. Your team’s skill sets are laughably lacking.
- The government is directed to fill quotas, in order: “Disabled People,” “Veterans,” and “African Americans.” Then there is an extremely small and competitive group of positions available where UVA and Columbia graduates get passed over, much less your everyday Joe. These three quotas will be filled regardless of skill set. A person was brought on as a Perl and C# programmer for one project I was working on. Our team had no part in the interview process. The person had never even heard of Perl or C#. In fact, that individual had never coded a thing. Until the day I left, he sat at a desk. That was his role. I’ll never know why I was hired, although I was considered a ‘guru.’
3. Projects can be massive. A Government engineer’s part in the project is likely minuscule at best.
- There are plenty of massive projects in the U.S. Government. Projects are classified as CAT I, II, or III depending on the funding. These projects are massive because of the funding the program within the government can acquire. People will be brought onto a project to use up the funding, whether they are needed or not. One project I supported involved standing up two HP C7000 blade centers and connecting them to the local LAN. Eleven network engineers, 8 Linux engineers, 4 Windows engineers, and 50 DBAs were hired. Most of them just hung out all day for a year drawing engineer pay.
4. Federal Government Employees do not work
- A government employee does not work. In fact, a government employee goes through months of training learning how not to work. This training, called DAWIA, is utterly ridiculous. What’s more ridiculous is that many claim it’s better to have DAWIA training than an MA/MS degree. The training focuses on ‘managing’ contractors and their work. Although I was a network engineer assigned to a few projects that needed someone who knew what a router was, in this training I had to stand up in front of a ‘class’ of 50 people and explain why a simulated drone-building project was over budget.
5. You won’t get the tools you need
- Keyboard? Monitor? Laptop? Cell phone? Pens? Notepads? Screwdriver? Desk? You won’t get any of these… not without a major fight, and even then you’ll probably see them after six months. The 30 Rock scene with Alec Baldwin working on Capitol Hill, where Matthew Broderick writes notes down with a sharp piece of plastic, is spot on. Then again – read reason four above.
6. You’ll get awesome world class travel
- To sit at a desk and do nothing once you get to Iraq, Italy, Kuwait, or Japan. I went to each site as ‘government oversight’ for contractors. The four contractors sent to each of these sites had to install a switch – a 3750E, if I remember.
7. It is all meaningless
- Most federal contracts worth working on – CAT I projects such as the F-35 – are so full of bureaucracy and politics (think some government guy sitting at his desk doing nothing for a few weeks before responding to an email) that they take years to ramp up. Attaching to a new CAT I project means you need to be in it for the long haul. As a network engineer you may only be installing a handful of routers and switches every two years – after two years of planning. This is great for people who like job security, don’t care about being relevant in the industry, and want to surf the internet all day.
8. Everything moves extremely slowly
- It is hard to imagine how slowly things move. I’ve worked for banks, corporate networks, and the food industry. Things moved slowly there. It could take a few weeks to get something ordered for a project. The federal government, on the other hand, can take years to order something. By the time equipment arrives for many federal projects, it is already antiquated. Imagine ordering VMware 4.x today. This is happening! Imagine ordering an end-of-sale 3560 switch. This is happening. By the time equipment shows up it is outdated and no one wants anything to do with it.
But that’s not the only thing that moves slowly. Anything you need done for yourself can take a considerable amount of time. Get a copy of Visio? Get travel approved? Take a day off? Take care of a sick child? These things can take weeks to months to accomplish, because only other government employees are taking care of your needs. No one works in the government, so you have to break the door down to get anything done. There’s a scene in Catch-22 where John Yossarian has to tackle his boss to get him to sign paperwork. That is not satire! It’s total reality.
9. Everyone is extremely rude
- No one works in the government, so asking someone to do something is an affront. But you eventually have to do something (that year). Human interaction in the federal government is gross. No one wants to do anything, and people in personnel support positions are some of the most awful I’ve ever met. People on your team can be just as bad – contractors are hired, fired, and laid off daily. They hate you – but have to keep you happy to stay employed. That in itself is fun and awful at the same time.
10. It’s the perfect job if you want no responsibilities, less worry of being laid off than at an average company, stability, 9-to-5 hours, zero real stress, no real career growth (in the network industry), the ability to choose your own schedule, to sleep all day, to surf the internet all day, and to basically relax in life. I’ll absolutely be coming back to the government (when I’m 50 [and I’ll be a total asshole to everyone]).
I was put into a situation recently where a bunch of Flex-10 switches were dumped in my lap. Flex10 switches are HP devices that plug into the backplane of an HP C7000 BladeCenter to provide external and internal network connectivity to the blade servers within the chassis. I’m most familiar with Cisco blade switches such as the 3130, 3030, and B22. The Flex10 switches were foreign to me.
This post covers my own personal opinions and findings after deploying the equipment with little (really, no) support from an HP Flex expert.
Here’s a quick diagram.
We have an HP C7000 blade chassis with four server blades. We have four Flex10 switches. That’s pretty normal to me in big-box blade systems. What I’m not used to is the recommended cabling. The above looks insane to me. I would never cable IOS-XE or NX-OS switches like this.
The Flex10/10D switches are stacked lengthwise – from slot 1 to slot 5 and from slot 2 to slot 6. They are also stacked on the backplane crosswise, from slot 1 to 2 and slot 5 to 6. These stack cables are for HP… not for us. They do… stuff. According to the documentation, they do not switch (or do they?). So this stack is in no way similar to the stacking cables on a Cisco 3750 or 3130. You CANNOT run MLAG even though the switches are stacked (which they have to be).
OK so… uhhhh, the recommended cabling in a four-switch setup is to run a port-channel/LAG to slot 1 and then another to slot 6. What? Why? I don’t know, and I cannot find a reason other than that switching will not work if you do not follow this. Take a look at the recommendation for cabling up two chassis. It seems wildly dangerous to me.
Now – looking at this cabling – my first thought is: why do I need four switches, then? I only have two uplink pairs. Since I have to stack everything, it probably can’t be for A/B redundancy. Perhaps it has to do with redundancy during upgrades? (Nope – I had to reboot the entire blade chassis after upgrading the switches. Yikes!) The only reason I can see is to offer more NICs to the blade servers.
A blade server gets its NICs by way of built-in backplane connections. A typical blade server out of the box will connect to LAN1, which comprises bays 1 and 2 by default. This means that when you log into the server you’ll see two NICs connected – one to switch 1 in bay 1 and one to switch 2 in bay 2. Another device, called a mezzanine card, is required to connect to another pair of bays. So our blade servers above have one mezzanine card installed that is compatible with the Flex10 switches. This means each server gets four physical uplinks.
Is that enough? Many VMware best practices say that a host requires data, storage, fault tolerance, vMotion, and management NICs (and they should probably be redundant). So that’s 10 NICs required per host. I do not have enough NICs with the physical capabilities above. I can, however, enable the virtualization feature of the Flex10 switches. This allows me to configure up to four NICs per switch to each blade for a total of 16 in the above diagram. In other words, the blade will see 16 physical NICs via the four uplinks across the four bays. Just remember this is all in the backplane, not by use of the ports you see in the picture above.
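The NIC arithmetic above can be sketched out in a few lines. This is a minimal sketch of this particular chassis setup – the counts are assumptions drawn from the deployment described here, not HP-documented limits for every configuration.

```python
# Sketch of the Flex-10 NIC arithmetic described above. The counts reflect
# this particular chassis setup, not HP documentation.

PHYSICAL_UPLINKS_PER_BLADE = 4   # built-in connections to bays 1-2 plus the mezzanine card's pair
FLEXNICS_PER_UPLINK = 4          # Flex-10 can carve each 10Gb uplink into up to 4 NICs

total_flexnics = PHYSICAL_UPLINKS_PER_BLADE * FLEXNICS_PER_UPLINK

# VMware best-practice roles, each needing a redundant pair of NICs
roles = ["data", "storage", "fault tolerance", "vmotion", "management"]
required_nics = len(roles) * 2

print(f"FlexNICs presented to the blade: {total_flexnics}")   # 16
print(f"NICs needed for redundant roles: {required_nics}")    # 10
```

So the only way to meet the 10-NIC best practice on this hardware is the virtualization feature – the four physical uplinks alone fall short.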
Now – the configuration of the Flex10 switches. There are two options – GUI and CLI. It’s tough finding good documentation for the CLI. I tried starting off on the CLI but ran into way too many issues. Most of these stemmed from the fact that the GUI is the preferred method for initial config, because many changes you make to the switch require the blade server to be powered off. (EGAD!!!) I’ve never had to power off a blade in my data center history while configuring switches. Not even with UCS. What’s the deal?
So it is pretty easy to set up once you know the basics.
- You have to configure a server profile for each bay/blade.
- You have to configure shared uplink sets for the uplinks that connect to your top of rack.
- If you have Fibre Channel, you need to configure your SAN fabrics.
To start with shared uplink sets: I found that you’ll want to envision the MLAG you originally wanted and add all of its ports to a single uplink set. This facilitates redundancy in the event a switch is lost. It is not a true MLAG, because you’ll still only configure a LAG (port-channel) per switch. In my diagram above I have configured a port-channel on each switch to an upstream MLAG on the 5Ks. I then take both of these port-channels – the one in bay 1 and the one in bay 2 – and assign them to a shared uplink set. If I do not share them, I’ll run into issues sharing VLANs across the switches (you can’t, in the sense you’d expect). If you do not configure it this way, you have to identify each VLAN differently, and those separately identified VLANs (say vlan10a and vlan10b) cannot both be assigned to a server profile, meaning you lose connectivity if you lose a switch. So make one shared uplink set containing all the interfaces you want to carry a VLAN. My design here has one shared uplink set carrying all VLANs. This allows for redundancy and for all uplinks to be used.
So add a new shared uplink set and give it a name.
Add the interfaces in the above diagram. You can enable LLDP on your upstream switch to see some neighbor information; CDP is not a feature here.
Finally add all the vlans you want.
This whole network name / VLAN ID thing is bizarre to me. My only guess is that it is one of the ways Flex prevents loops despite having boatloads of interconnect cabling and no STP. The network name/ID (vneta) will show up for all your VLANs. You can do odd things with these in a multitenancy environment, as well as control failover. Here this is just a normal deployment, so the name and ID do not really matter. You’re limited to something like 300 VLANs, and you have to TYPE IN ALL YOUR VLANS MANUALLY. MUHAHAHAHAHHAHAH. I had like 250. So that sucked. It gets even worse with the server port profiles. So we’re done here; on to the server port profiles.
There are some ways to automate this, but they were too buggy and did not work for me. I had to make all of them individually – in other words, create a template, copy it, and assign it to each bay individually.
So add a new one.
I just named them based on the bay they’ll be assigned to. Add your Ethernet adapters. This is where things get funky. Recall I said that with one mezzanine card and four switches, each blade gets four physical uplinks. The Flex switches allow for physical NIC virtualization, so we can present four NICs per switch to the blade. The OS (ESX) will see four physical NICs attached; all of their traffic traverses the single real physical NIC we are virtualizing with Flex. Most big-box servers have similar features. What’s different here is that you cannot pick and choose!! GAH AGAIN! You get what you get. So you have a few Ethernet adapters assigned at first, and as you add more in your template, Virtual Connect assigns them round-robin. So you get a NIC on one switch, then another, and so on. It’s confusing. You’ll have to template it out, assign it to a bay, and then go into your ESX host and map your NICs to ensure you add the right VLANs to the correct downstream NICs. So in order to get the 10 NICs we need, redundantly, we HAVE to add all 16 NICs! So instead of using 10 NICs I used 16, because – why not? In the end it should look something like the above.
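To make the round-robin behavior concrete, here is a rough model of how the adapters land across bays as you add them to a profile. The bay ordering is an assumption for illustration – verify the real NIC-to-switch mapping from the ESX host before trusting it.

```python
# Rough model of the round-robin FlexNIC assignment described above: as
# adapters are added to the profile, Virtual Connect alternates across
# the interconnect bays instead of filling one switch first. The bay
# order here is an assumption -- check your own chassis.

bays = [1, 2, 5, 6]              # bays holding the four Flex10 switches
flexnics_per_blade = 16

mapping = {f"nic{i + 1}": f"bay{bays[i % len(bays)]}"
           for i in range(flexnics_per_blade)}

for i in range(flexnics_per_blade):
    nic = f"nic{i + 1}"
    print(nic, "->", mapping[nic])
```

Under this model, nic1 lands in bay 1, nic2 in bay 2, nic3 in bay 5, nic4 in bay 6, and then it wraps – which is exactly why you have to map everything out before assigning VLANs.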
Now, after you go into ESX and map the NICs on your host to your switches, you can assign your VLANs to your host NICs. What we’re doing here is configuring the NICs on the Flex switches that connect to the servers; before, on the shared uplink sets, we configured the NICs on the Flex switches that connect to the top of rack. So figure out your mapping (very important) and assign your VLANs. Oh, news flash – upgrading from ESX 5.0 to 5.5 CHANGES THE MAPPING! So add the same VLANs from your shared uplink set:
So… this sucks. Try adding all 300, then remember you have to do this for all 16 bays, then try having to update a few VLANs later. This is tedious, and it’s where the CLI is helpful.
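Since the entries are pure repetition, one option is to generate the CLI lines with a script and paste them in. The command template below is a placeholder – the real Virtual Connect CLI syntax for adding a network varies by firmware version, so check the module’s own help output before pasting anything, and the uplink set name is hypothetical.

```python
# Generate ~250 repetitive VLAN-add commands instead of typing them.
# The "add network ..." template is a placeholder, not verified Virtual
# Connect CLI syntax -- adapt it to your firmware's actual commands.

uplink_set = "SUS-A"             # hypothetical shared uplink set name
vlans = range(10, 260)           # e.g. VLANs 10-259

commands = [f"add network vnet{vid} UplinkSet={uplink_set} VLanID={vid}"
            for vid in vlans]

print(len(commands), "commands generated")
print(commands[0])               # first line of the paste buffer
```

Even if the template needs tweaking, fixing one format string beats retyping 250 VLANs per bay by hand.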
Then make sure you have assigned it to the right bay.
You will not see the uplink data until you assign the profile to the bay. So you’ll have to template it out and perform quite a bit of trial and error (at least I did).
And that is it. We’re done. Flex10D networking… another checkbox, I guess.
I’m sure I’ve done quite a bit wrong in the above. This is how I could get things to work. Nowhere did I find that it is best practice to put all uplinks into a single shared uplink set; rather, I found you should split them. However, then you need to use SmartLink, and failure scenarios have to happen ‘just so.’ I’d appreciate comments from anyone who sees any glaring mistakes. I also hope this helps you. I could only find one good blog – http://hongjunma.wordpress.com/ – and I recommend reading his work.
I’ve been harsh, but I can say that I think I see where HP is going. I’m sure they get lots of complaints from the server group about losing control of their blade center to the networking folks. I can easily see this solution moving to the server group. Configuring this environment requires almost no understanding of network fundamentals. In fact – I think they’re thrown straight out the door! Flex connects everything and throws uplinks everywhere on everything. There’s no way not to get something working after clicking a few buttons. Hell, I did it!
I may be late to the game on this complaint, as I’m sure I cannot be the only one. I had to do some campus work the other day, and the local guys asked me to bring some standard PDU power cables to get a switch installed. I have boxes and boxes of C13/C14 cables. I brought a few, but when I arrived – BAM – C15.
For the past 10 years I’ve only used C13/C14 or C19/C20. I’ve also used the standard American NEMA 5-15 for campus. I’d never seen C15 before. It’s essentially a C13 connector with a groove in the middle. Why? Well, C15 cables are rated for higher temperatures. I guess I can understand this, given that lots of campus closets have no HVAC. But I’d never seen it! In my opinion it should have been clearly spelled out on their website. It’s there, obviously – I double-checked. But this should be called out as a caveat, in my opinion.
The switch comes with a NEMA 5-15 end and a C15 end. Most of my deployments these days use PDUs that require C14. Would you like to know how hard it is to quickly pick up some C14/C15 cables? I couldn’t do it. Graybar, Fry’s, RadioShack, Best Buy, Lowe’s, and various local electronics stores did not have them. I had to overnight them. I (well, the business) ended up paying $350 for two power cables.
So, my mistake? Yes, definitely. And I clearly owned up to it. I should have checked all the requirements. But wow, I never would have expected this. I almost feel like a joke was played on me.
Cisco Catalyst 3850 Switch
— Deprecates the 3750.
3750, we hardly knew ye. The 3750 was possibly the coolest switch ever introduced for the data closet and beyond. It could STACK! That was pretty cool. It was a new color. That was pretty cool. It could route. That was pretty cool.
The 3850 is an enhancement of the 3750. It does PoE, routes, stacks – all of that. It also supports Flexible NetFlow, and it can act as a WLC and support a number of APs inherently. Like any new switch that deprecates an old one, it has higher throughput, bigger buffers, larger TCAMs, etc.
I’m curious about the WLC support. It took me years to build a wireless infrastructure where the AP was its own WLC. It took me years to consolidate site WLCs (AP-integrated) into a single box at each site. It took me years to consolidate site WLCs into a single box at the data center. Is Cisco taking a new approach here? Are we going to forget the tunnels (they suck anyway) and start using WLCs per switch port? I’m keeping a close eye on things. I’d love to get rid of those damn tunnels to every single site.
I also have an order going out soon for a handful of 3750s. Is this the time to change to 3850s? My left says no – give it a few months :: BUGS! My right says yes, new toys!
Cisco Nexus 6004 and 6001
—Deprecates the 5500 and 5000 series switches if not the 7Ks.
The 6004 weighs 120 pounds! That’s so heavy they specifically call it out in its own section of the data sheet. Holy backplane, Batman! It’s a 4RU monster. The 6004 has a large number of 40GbE ports (48 without add-on modules) and, eventually, 100GbE ports. It can support many more FEXes than the 5500. The 6001 is a 48-port 10GbE switch with 40GbE uplinks. Let no one say there is not enough bandwidth in a corporate data center. Also new to me is the 40Gb copper Twinax non-breakout cable. Like any new switch that deprecates an old one, it has higher throughput, bigger buffers, larger TCAMs, etc. I’m looking forward to the new FEXes that will come along with these.
I’ll have a buy going out in the next four months or so for some additional 5548s. This is a dilemma. Comparison time.
Cisco Nexus B22 FEX for Dell M1000E
Deprecates the 3130
—This may be old news, but it has just hit my radar. I was aware of the FEX for my C7000-series chassis but not for my M1000E. I really did not expect these to ever come out. I’m happy to see them. These devices are of course switch blades (well, FEXes) that install in Dell’s flagship M1000E product. Prior to these, you had to use Dell Force10 integrated switches for 10GbE connectivity to the servers. Like any new switch that deprecates an old one, it has higher throughput, bigger buffers, etc.
I’m moving to UCS hardware so this is just something cool for me to know in case we want to upgrade our older M1000E chassis.
Please read the pasted URLs for detailed, real information on the products.
I love books. I’ve been hooked on fiction ever since reading my first novel, “The Girl of the Sea of Cortez” by Peter Benchley (who also wrote Jaws). Growing up, I amassed closets full of books, from thrillers to sci-fi to horror to fantasy. I’d travel to the bookstore after a sporting event (swimming, baseball, and cross-country) with my father every weekend and get to browse Borders and choose a handful of books. One of my fondest memories of reading at a young age is being curled up under the covers in my bed at night, 11 years old, reading “Sphere” by Michael Crichton and being scared as S*** reading (hearing) that alien voice screaming at me in all CAPS.
Choosing a book involved seeing how many pages it had, reading the back cover, and often judging it by its front-cover picture. The price usually seemed to align with the number of pages. A tome like Stephen King’s unabridged “The Stand” would cost more than a quick and easy read such as Kurt Vonnegut’s “Slaughterhouse-Five.”
Looking at a book allowed me to judge fairly easily how long it would take to read. It also gave me some satisfaction, with a good book, in knowing that there was ‘that much more’ story before the end of the novel – or that much dissatisfaction in seeing how little of the novel was left.
A year or two ago I bought my first eReader – a Kindle Fire. It has completely changed my reading experience, some for the good and some for the bad. I’ve not been in a bookstore since receiving the Kindle. The Kindle has made my library all but meaningless. Browsing for new books is so much easier (I can see reviews, I can sample the first 50 or so pages of a book at my leisure, I can write reviews, and there’s no one to bump into me or excuse themselves in front of me while doing it). All this, but I still really miss the conventional bookstore. I miss Borders (which has closed), I miss the old used book store down the street (which has closed), and more importantly I miss the feel of books.
eBooks have made page numbers irrelevant to me. They are absolutely meaningless. I recently started reading a fantasy series called “The Dresden Files” by Jim Butcher. I needed something easy to read after “Under the Volcano.” These books were fun. My Kindle showed that they had 4000 pages. They cost $9.99 each. That’s how much a typical book costs!! I felt ripped off. These books take at most a few hours to read. They are what I feel the fiction-novel industry is moving toward: you do not know how many pages the book really is, so the eBook industry can flippantly charge whatever it wants and the buyer will be none the wiser.
The “Dresden Files” have pissed me off! Something is just not on the level when I pay ten bucks for a short story. Wait… was it a short story? I don’t know!!! The page numbers mean nothing to me. “Game of Thrones,” as I recall, seemed to have 30K pages on my Kindle. That means Dresden was an eighth of the length. It certainly felt like a short story, and I may even plan on traveling to Barnes and Noble just to put my hands on one of these novels.
eReaders need to change how the length of a book is presented. They need to judge your reading pace and then base the ‘length’ of books on the estimated time it takes to read them – not some meaningless digital page number that’s different every time for some reason. The time left to read a book (i.e., you have two hours left from whatever point you are at in the text) would be very helpful to me. I’d see a six-hour book, “Battlefield Earth” (possibly the best book ever written for men), for sale at, say, $13, and then a one-hour book, “Dresden Files,” for sale at, say, $3.
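The time-based length idea is simple arithmetic. A minimal sketch: the word counts and the 250-words-per-minute pace below are illustrative assumptions, not real data for any of these titles.

```python
# Sketch of the time-based "length" argued for above: estimate hours to
# finish from a word count and the reader's measured pace. The word
# counts and the 250 wpm figure are illustrative assumptions.

def reading_hours(word_count, words_per_minute=250):
    """Estimated hours to finish a book at the given reading pace."""
    return word_count / words_per_minute / 60

books = {
    "short urban-fantasy novel": 90_000,
    "epic doorstop": 450_000,
}

for title, words in books.items():
    print(f"{title}: ~{reading_hours(words):.1f} hours")
```

A store could run exactly this calculation per reader and put “~6 hours” on the listing instead of a fake page count.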
Don’t get me wrong – I really enjoyed the Dresden Files. It’s a goofy cross between Lehane’s ‘Patrick Kenzie’ and Pratchett’s ‘Rincewind.’ I really want to read the rest of the books in the series, but at ten bucks a pop I’ll end up spending $100 this week just to read them. Is that fair? It can’t be. It should be $9.99 for the first three novels, maybe.
And I won’t even go into quality over quantity. Books have never worked that way. Le Morte d’Arthur (possibly the most boring book written since the Bible) cost the same as Harry Potter at Borders. It’s not time to start saying that a 4000-page eBook costs $10 because it’s ‘better’ than that 30,000-page eBook. It’s a sham. eBooks should be significantly cheaper. All I see now is the creep. I bet eBooks will actually cost MORE than conventional books in less than a year. All they have to do is tell you that you’re saving money (even if you aren’t).
Ok rant complete. Read Dresden, I guess, if you can afford it.
I’m a Cisco ICON wh****. I like them as they are notional and fit well in my diagrams.
Anyway, it looks like they updated the icons to include the Identity Services Engine, which is becoming prevalent in my organization. Oddly enough, this appears to be the only update in nearly two years. I just hope I’m not missing some goodies by only using the PowerPoint.
(I use the PowerPoint icons vs. the Visio ones – mostly because I can’t tell what an icon looks like in Visio until I drag it over.)
They have a blue and purple coloring scheme. I’m not sure why, but I guess I need to start studying ISE, as my guess is it will replace ACS in the near future.
Bad – but only because I suck at this job.
Take a look at Cisco’s firewall offerings. You’ll find the 5512, 5515, 5525, 5545, 5555, and 5585. Six appliances to choose from! And their capabilities slowly ramp up as the model number increases. See below:
On the surface this seems like it could be beneficial. Let’s say you have a remote site with an Ethernet WAN circuit that provides 100Mb but can scale as needed to 1Gb, and you only need firewalling. Obviously you probably only need a 5512.
What happens, though, if you connect a number of servers to a separate DMZ and they each have their own vault? Then the throughput becomes all guesswork. Sure, you could get some traffic analysis done, assuming you have the time, tools, servers, and support, but that would still just be an estimate (and probably a poor one if you do not use a very expensive network modeling tool like Guru).
This is why I suck. Most servers are 10Gb now, and I just do not know which firewall I should use. So do I leave it up to the customer? Do I say, “Well, your non-enterprise servers at that remote site or in that test building are all 10Gb-connected, but they will probably never really use that bandwidth, so we can include a bottleneck in the design”?
Do I write the models down on notepaper, throw them in a hat, and select at random? Really – who knows if you need 1, 1.2, or 2 Gbps? Who knows if they need 200, 250, or 300 Mbps of VPN throughput?
Do I tell the customer to buy cheap, and then if it doesn’t work to their satisfaction, buy expensive later and use the cheaper firewalls on another project?
Do I buy an expensive network analysis suite like Guru and plead with the systems guys to prebuild the environment so I can gauge demand?
Or do I just say screw it, buy the 5555-X, and state that it will STILL be a bottleneck for 10Gb servers?
Or do I plead with the security group to allow different-level security vaults to circumvent the bottleneck firewall?
Do I tell the customer that 10Gb only belongs in an enterprise data center, so one-off solutions will either have bottlenecks or be inordinately expensive? And what if they say, “Sure, we understand – which firewall are you going to use?” It is still like pulling out of a hat.
What if I get real and just look at the average throughput of 10Gb servers (and find that it is somewhere along the lines of 1Gbps with occasional spikes)? That seems to show I can go with the 5512 or 5515. But I don’t feel safe with that – especially on high-visibility projects (they all are these days, right?).
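One alternative to drawing from a hat is to make the sizing rule explicit: estimate throughput, multiply by a headroom factor, and pick the smallest model that covers it. The throughput figures below are rough illustrative values, not quotes from Cisco datasheets – pull the real numbers before sizing anything.

```python
# Sketch of a sizing rule: smallest model whose rated throughput covers
# the estimate plus headroom. Throughput figures are rough illustrative
# values -- verify against Cisco's actual ASA datasheets.

MODELS = [                       # (model, approx. firewall throughput in Gbps)
    ("ASA 5512-X", 1.0),
    ("ASA 5515-X", 1.2),
    ("ASA 5525-X", 2.0),
    ("ASA 5545-X", 3.0),
    ("ASA 5555-X", 4.0),
]

def pick_model(estimated_gbps, headroom=1.5):
    """Return the smallest model rated for the estimate times headroom."""
    needed = estimated_gbps * headroom
    for model, rated in MODELS:
        if rated >= needed:
            return model
    return None                  # nothing fits; look upmarket or redesign

print(pick_model(0.5))           # 0.75 Gbps needed
print(pick_model(1.0))           # 1.5 Gbps needed
```

Notice how sensitive the answer is: a 0.5 Gbps estimate lands on the bottom model, while a 1 Gbps estimate skips straight past the 5515 – which is roughly why I never see myself ordering the in-between boxes.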
I just do not see myself ever ordering the 5515 or 5545.
Now – if there were just one chassis, say the 5500, and this chassis were field-upgradable – i.e., you could turn a 5512 into a 5515 and then into a 5525 and so on with a cheap RAM or processor upgrade – I would feel much safer saving money on the low-end appliance.