That’s a rather insensitive, take-no-prisoners kind of title, isn’t it?

And yes, I’d truly be worried about getting hate mail or hauled off to jail for my intolerance if this post were aimed at anyone but shit-for-brains liberal arts or business school graduates. (What were you thinking anyway? $80K for a BBA? Yeah, that was certainly not a good business decision.)

So now you’ve parachuted into my world. You’ve gotten your technical certification. You’re now a CCIE, CSSE, or whatever. Yeah. Your certification might contain the word engineer, but that doesn’t make you an engineer. It’s the training, stupid.

Let’s summarize, shall we? Industry technical certifications are useful for three reasons. One, they get business-degree grads retrained so that they can actually earn more than minimum wage. Two, the vendors get a double payback: an additional revenue stream from selling the classes, the books, and the tests, plus future employees who are certified to work on their machines. And three, tech certs offer individuals a less costly alternative to going (or returning) to university.

The rub is, you get what you pay for.

And no, I am not saying you won’t make money with your certification; CCIEs make a shitload of money. What I am saying is – you’re not getting the full monty. The real deal. True engineer-class training. In other words, you can’t buy your way into this club. And most certainly not with a certification.

And yeah, I’m biased. But only because I gave blood, putting in 5 hard years of studying and working my ass off to make it through a pretty damn rigorous and unforgiving engineering program. And yeah, it pisses me off when someone takes a shortcut route into my industry (via a 6-month tech cert) and then somehow thinks that an engineering title makes them a real engineer.

Well it doesn’t. And the difference is all in the training.

Let me illustrate.

I mentioned a couple of posts ago that I was reading ‘The Intel Trinity: How Robert Noyce, Gordon Moore, and Andy Grove Built the World’s Most Important Company’ by Michael S. Malone. And while I have bookmarked several important pages, it wasn’t until I got to the chapter on Craig Barrett, covering his single greatest contribution to the company, that it occurred to me that it was also a case study in why non-engineers can’t deliver the same high production values in technology roles.

The back story: in the early ’80s Intel, like every other US integrated-circuit manufacturer, was plagued with low yields. On any given wafer, only 50-60% of the chips would work to spec. That was a huge problem, and it impacted everything from meeting deliveries, to waste, right on up to undermining the profit that was desperately needed to fund critical next-generation R&D.

Back then there was no real unified fab environment; the Singapore fab, for example, did things differently than the Portland fab. So the troubleshooting approach was, consequently, disorganized. And it was a big, complicated system. Intel’s fab people took a mostly bottom-up approach, wanting to know what was broken and why they weren’t getting the yields. So huge amounts of time and effort were spent sifting through the minutiae of large amounts of data, and in the end finding nothing useful.

Barrett changed all that. He looked over the fab environment, and the first real thing he did was create a test bed where a fabrication process would first be proved out. Once proved – and here is where the wheels met the proverbial road – he instituted a ‘Copy Exactly’ policy as that process was rolled out to each of Intel’s fabs.

And importantly, he told the fab guys to forget trying to understand what didn’t work and focus instead on what worked (by instituting the practice of copying the proven processes exactly).

That’s kind of common sense, isn’t it? No. Actually it isn’t. The fab guys were following what seemed to be a common-sense approach: find the problem. That seems logical right up until a cooler head prevails and states the obvious: ‘the problem’ could be problems (plural). The fab systems were a tangle of variables with complex inter-dependencies.

Craig Barrett approached the same situation systematically. First, find out what you know. Put boundaries around things. Isolate everything, system by system, process by process. Test. Reassemble. Test again. Then implement each system, process by process.
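The shift in mindset is simple enough to sketch in a few lines of Python. This is a toy illustration – every step name and recipe value below is invented by me, not taken from Intel – but it captures the idea: stop mining failure data for "the" problem, and instead flag every deviation from the proven reference process.

```python
def qualify(fab, reference):
    """Compare each of a fab's process steps against the proven
    reference recipe. Any deviation is a suspect -- no need to
    understand *why* it fails, only *where* it differs."""
    return sorted(
        step for step, settings in fab.items()
        if settings != reference.get(step)
    )

# Proven test-bed recipe (values made up for illustration):
reference = {"litho": {"exposure_ms": 120},
             "etch":  {"time_s": 90},
             "dope":  {"temp_c": 950}}

# A fab whose etch step has drifted from the proven process:
drifted_fab = {"litho": {"exposure_ms": 120},
               "etch":  {"time_s": 85},
               "dope":  {"temp_c": 950}}

print(qualify(drifted_fab, reference))  # ['etch']
```

The point of the sketch: once a reference exists, finding the fault is a mechanical comparison rather than an open-ended investigation.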

True, Barrett had to unbolt everything and start over from scratch, but in the end his approach was nothing more than a version of Deming’s modern method for quality control. (Which, ironically, was something Detroit turned down but the Japanese embraced – which of course explained the Japanese manufacturers’ much more acceptable 80% chip yields.)

I joined Intel in ’89, and my first real big project was designing and building the communication systems for Intel’s Sales and Marketing Division. It started out with me as a temporary loan from my division to Sales and Marketing, originally tasked to do only the first 3 new sales offices. After all, I was an infrastructure guy (with a background in low-voltage communication systems) as well as a transmission guy.

Anyway, after 3 offices I was on time and under budget. So 6 weeks turned into 16 months, and my role expanded to take in all 25 US domestic locations, then Canada, Mexico, Colombia, Brazil, and finally all of Asia.

I wasn’t aware of Barrett’s ‘Copy Exactly’ program in the fabs, but I instituted a similar policy for all of the Sales and Marketing communication rooms across the world: every one of them looked exactly the same, right down to the way the cables were run and the wires terminated.

I did that for one reason: troubleshooting. If I was in Denver and something broke in Montreal, the problem would be faster to locate, especially if I had to walk someone in Montreal through troubleshooting something as confusing as the cable plant. My guiding principles were simple: propagate a viable working standard across the planet and document everything.
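Once a standard is documented, enforcing it can be mechanical too. Here is a small hypothetical sketch in Python – the site records and field names are invented for illustration, not taken from the actual project – that audits every site against the golden standard and reports only the deviations:

```python
def audit_sites(golden, sites):
    """Report, per site, which documented items deviate from the
    golden standard (including items the standard doesn't mention)."""
    report = {}
    for site, plant in sites.items():
        diffs = {item for item in golden if plant.get(item) != golden[item]}
        diffs |= {item for item in plant if item not in golden}
        if diffs:
            report[site] = sorted(diffs)
    return report

# Golden comm-room standard (fields are hypothetical examples):
golden = {"patch_panel": "110-block", "riser_label": "BLUE-A", "dsu_slot": 1}

sites = {
    "Denver":   {"patch_panel": "110-block", "riser_label": "BLUE-A", "dsu_slot": 1},
    "Montreal": {"patch_panel": "66-block",  "riser_label": "BLUE-A", "dsu_slot": 2},
}

print(audit_sites(golden, sites))  # {'Montreal': ['dsu_slot', 'patch_panel']}
```

A conforming site produces no entry at all, so the report is exactly the to-do list for bringing the outliers back to standard.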

It wasn’t genius; it was just the smartest way I could think of to keep myself out of future trouble. Simplicity. It was only about 100 years ago that Henry Ford proved he was no fool when the first cars rolled off his assembly line in one model, one style, and one color.

And as unfriendly as it might sound, it has been my experience over a 25-plus-year career that you can’t expect someone who hasn’t been properly trained in a design/build/test environment to understand a systems approach: modules, subsystems, systems; following standards; delivering working documentation.

AT&T was my equipment/cabling/installation vendor – at least domestically – and some of their old-time technicians would still try to insist on doing it the way they’d always done it (in Denver or Portland or New York or wherever). Their rationale: as long as it worked, who cared how it was done or what it looked like?

And you know what? That would have been the easiest path. As long as it worked, right? Who other than me would have cared what it looked like? Who would have known? And on the front end – the installation phase – it was harder work, especially for me, as I sometimes had to get AT&T to do an office all over again.

And when that happened, AT&T program management typically went behind my back and bitched to my program manager, whose standing answer was that the first time they bitched to me directly, I was to remind them to look at the agreement (the contract).

And then, something like 15 offices into the project, everyone involved – the customer, my management, and finally even AT&T – said, “They (the comm rooms) all look the same!” And with that collective sigh everyone realized that the bit of pain for conformity, and the few additional dollars spent on do-overs on the front end, was going to save everyone time and money on the back end. And future support costs, including maintenance, became something far more tangible.

Then there was the less measurable benefit of maximizing uptime – a real asset, and one that was way more important to the support people within that international division, who operated around the clock from 50 different locations, including 3 in India.

Note: After the Sales and Marketing project wrapped up, I went on to do similar work for some of Intel’s factories in Asia. I changed groups within Intel, and shortly after arriving in my new group I got pulled into troubleshooting an intermittent (the worst kind) network outage at one of the factories near Seoul, Korea.

We spent hours on the phone talking to the IT guys at the factory, and hours on conference calls with the Korean telecom authority. Until finally my boss told me to go out there, find the problem, and get the damn thing fixed. And the cause was so unbelievably stupid that it turned out to be one of the easiest troubleshooting problems of my life.

Who knows how many tens or hundreds of thousands of dollars had been lost in productivity to an improperly terminated cable? I found it the first day. The telco-provided fractional T1 cable had literally been stabbed through a broken window (that was my first clue) and left dangling, never tightened down, making only intermittent contact with the DSU (the main landing point for the international data circuit). Intermittent contact equals intermittent problem.

Consistency, precision, reproducibility: all core engineering vocabulary.

PS – I ran a lab for Intel for a couple of years where we tested new comm hardware and software. Anytime anything got tested, it was ‘show me the results’. I was university-trained in the principle that if you didn’t write it down (the results), you didn’t do it. And the results, if they were written down, had better be coherent to the next person. And if need be, you had better be able to reproduce them. If you can’t, that’s demonstrable proof that you had better rethink your methodology.
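That lab discipline is mechanical enough to sketch too. A hypothetical Python example (the test name and procedure are mine, purely for illustration): run the same procedure several times, write every result down, and let the record itself flag irreproducibility instead of hand-waving it away.

```python
import json

def record_run(test_name, procedure, trials=3):
    """Run a test procedure several times, write all the results
    down, and flag whether the run was reproducible."""
    results = [procedure(trial) for trial in range(trials)]
    entry = {"test": test_name,
             "results": results,
             "reproducible": len(set(results)) == 1}
    print(json.dumps(entry))  # the written-down record
    return entry

# A measurement that is stable across trials (hypothetical example):
entry = record_run("loopback_latency_ms", lambda trial: 42)
# entry["reproducible"] is True; if it were False, rethink the method.
```

The record is machine-readable, so the next person can check your results without deciphering a lab notebook.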

PPS – This whole post is just another example of why I hate business majors and other non-technical ilk – with lame industry certifications – working in technology roles.
It’s not just wrong. It’s stupid.