Saturday, September 18, 2021

What I Learned from a Waterfall Project Failure

In my most recent post on lessons learned in my career, I covered my time as an IT employee at Smith Tool. I learned so much in those years, and I need another few posts to cover some of the more interesting lessons.

By 1980, my work in manufacturing systems had all been in machining operations. Now Rodger gave me a new assignment: to develop a new system to support Smith’s forge plant. This opportunity took me upstream into the forge, which I was told was the largest forge in the United States west of the Mississippi1.

As I noted in the previous post, I loved the nitty-gritty sights and sounds of the metalworking plant. But the forge was another whole level of physicality, almost violent. As you approached the forge, you could hear the hammer and feel the ground shake as the press hammered out parts2. And outside the plant were pallets of newly forged parts, still red hot to the point where you could not stand closer than 20 or 30 yards without feeling the heat.

Here is a good video of a forge plant that gives a sense of what it’s like to be inside one. The press in this video is smaller than the two at Smith, our forge plant was more modern, and the parts being forged in the video are different, but the sights and sounds are the same.

Forging Operations Are Tricky to Schedule

Smith used a process called closed die forging. This means that the red-hot steel bar would be pressed into a die in the shape of the finished part. The tricky part is that a die could only produce a given number of forgings before it had to be sent out for “resinking,” a sharpening operation, so that it could produce more forgings. The production scheduler used manual log books to keep track of how much life was left in each die and to know which dies had enough life left to fill an order. But if the log books were not updated correctly, the forge might not be able to meet its production schedule. This was the process the new system would automate, with the benefit of better meeting production schedules.

The new system was described in a company newsletter after the project was (supposedly) completed.
Simply stated, the Forge Die Tracking System keeps track of over 3,000 components that make up the 300 forge dies used to stamp out the various forgings to make Smith Tool’s products. Additionally, the system makes the die selections to fill each forge order according to which dies most closely match in wear and on the basis of how much life the dies have left. When the system notes that a die’s life is getting low, it will suggest to the scheduler that the die be used to make as many forging as is left in it to make, and then be sent out for resinking. So, while the order is being filled, the excess forgings being produced will go into inventory as safety stock.
I was assigned as project manager and lead analyst six months into the project, which had stalled for lack of someone who could develop the calculations for picking the best set of dies for a given order, suggesting the optimum production quantity to “run out” a die, and tracking dies sent out for resinking.

I took it as a challenge. I remember distinctly that this was taking place during the run-up to the 1980 US presidential election. So, I wrote my programming specifications with references to identifying candidate dies, nominating them, and then electing the best one. Although I had three programmers reporting to me on this project, I wrote some of that core scheduling logic myself.
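The candidate/nominate/elect idea can be sketched in modern terms. Everything below is hypothetical: the field names, the 90% run-out threshold, and the rule of preferring the most-worn die that can still fill the order are illustrative assumptions on my part, not the actual system's logic.

```python
# Hypothetical sketch of die selection: identify candidate dies with enough
# remaining life, then "elect" one, suggesting a run-out quantity when the
# die's life is nearly spent (excess forgings go to safety stock).
from dataclasses import dataclass

@dataclass
class Die:
    die_id: str
    life_remaining: int   # forgings left before resinking
    wear: float           # 0.0 = new, 1.0 = fully worn

RUNOUT_THRESHOLD = 0.9    # assumed: suggest run-out when a die is 90% worn

def elect_die(dies, order_qty):
    """Return (die, suggested_qty) for an order, or (None, 0) if no die fits."""
    # Candidates: dies with enough life left to fill the order
    candidates = [d for d in dies if d.life_remaining >= order_qty]
    if not candidates:
        return None, 0
    # Elect the most-worn candidate, so fresher dies are saved for later orders
    elected = max(candidates, key=lambda d: d.wear)
    if elected.wear >= RUNOUT_THRESHOLD:
        # Run the die out: produce its full remaining life, then resink it
        return elected, elected.life_remaining
    return elected, order_qty
```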

The New System Installed but Not Implemented

So, what went wrong? There is a little hint of the problem in the final paragraph of that newsletter story on the project:
All the die structures will be on the system by the end of November. The history of all those components will be on by mid-December. The Forge personnel are enthusiastically anticipating the ease and efficiency the system will bring to the Forge operation. [Emphasis added]
Note the future tense.

Here’s how it went down. We put the system into production, and each week I would follow up with the director of forge operations to see how they were coming along with loading the die information. He had assigned this job to his administrative assistant but told me she was too busy to get to it. After several weeks of follow-up, it was clear that they had no intention of loading the master file data that the system would need to start scheduling the forge. The system became a very expensive piece of shelfware. I don’t recall how long they let the system continue to run in production without any data to process. But I was soon assigned to another development project.

So, how did this project get approved in the first place? Later I found out that the forge director felt the IT department was spending all its time developing systems for the machining plant, and that it was “his turn” for a new system. The IT steering committee complied.

Lesson Learned: Test User Commitment through Phased Implementation. This was my first real experience with the drawbacks of a waterfall development approach, where you define all the requirements up front, then design the entire system, then program it, test it, and deploy it. In this case, the users were happy to meet with us to provide their requirements and review our system design. But in terms of actually doing any real work, the users were off the hook until we put the system into production. At that point, they were not willing to let a low-level administrative assistant spend the time to do the necessary data entry, or hire a temp worker to backfill her regular duties so she could do so.

After that, I vowed never again. What we should have done is build the database and then have the users enter the master file data before we invested more time programming the scheduling logic—the really difficult part, where the bulk of the development hours would go. That would have tested the users’ commitment and saved several months of wasted effort. This was 20 years before the Agile Manifesto, but my software engineering courses at UCI had already taught me about Barry Boehm’s spiral development methodology, which in many ways anticipated Agile. If only I had had the foresight and permission to take this approach.


Postscript: In reviewing a draft of this post, the department manager at the time, Rodger Beard, offered further recollections. He writes (lightly edited):
I felt at the time and actually still feel a sense of personal failure for allowing this project to unfold the way it did. Exactly the way you've described. A painful recognition, over many months, that good work was being completely wasted.

This was my first experience with having pure politics result in a significant waste of IT resources. However, like you, I learned from it. An aside, at the time, I also felt this system was not well conceived. If it were well deployed, it could have had an excellent ROI. But with the caveat that additional, easy-to-avoid ongoing human capital investment would be necessary to make it pay off. A red flag really.

I think you're spot on regarding how the requirement should have been addressed. This was definitely mine (especially) as well as [name withheld]’s leadership error. We knew it was an "it's my turn" situation. [Our leaders] had decided to throw the forge a bone to shut up the forge director. If only I had had the foresight to ask you to do what your article says should have been done. Sigh.
Thanks again, Rodger, for your confirmation in this series.   

End Notes

1The forge was about 200 yards from the Smith Tool metalworking plant on Von Karman Avenue in what used to be known as the Irvine Industrial Complex. Driving through that area today, now known as the Irvine Business Complex, it’s hard to believe there was such heavy manufacturing there into the 1980s. That part of Irvine is now mostly commercial offices and some distribution or light manufacturing facilities. The metalworking plant today is an Amazon distribution center.

2The purpose of forging is to form the raw material into a part that has dimensions close to what is needed so that the first machining operation only needs to remove a minimum amount of metal. Forging also improves the metallurgical properties of the material. It is the same process as the ancient blacksmith employed with his hammer to make metal implements, such as horseshoes. In fact, Smith Tool began as a blacksmith shop in 1902 in Whittier, CA.

Photo Credit

Waterfall. The original uploader was PaulHoadley at English Wikipedia., CC BY-SA 2.5, via Wikimedia Commons

Saturday, August 28, 2021

What I Learned at Smith Tool about Enterprise IT

This post continues my series looking back to lessons learned in my career, which started in 1974 at Macy’s headquarters in Manhattan and continued at TRW Credit Data in California in 1976. This post takes me to the next step of my journey.

As noted in the first post, my goal is not just to talk about how technology has changed. Everyone knows that. As incredible as those changes have been over my nearly half-century in the business, it is also fascinating how many things have not changed. Many of the lessons learned still apply today. That’s my focus.

Getting Restless

Although TRW was a great learning experience, I only stayed there for about 18 months. I was getting bored with accounting systems, and I was looking for something where I could continue to develop new skills. I read in Computerworld that manufacturing systems were the next big thing, so in 1978 I started another job hunt.

One of my interviews was with Smith Tool, a division of Smith International, an oil tools manufacturer in Irvine and, at the time, the third-largest employer in Orange County1. They made me an offer, and I accepted. In addition to being able to break into manufacturing systems, the fact that I might be able to somehow apply my degree in geology was also attractive2.

Lesson Learned: Take Charge of Your Career. In the 1970s, our elders commonly advised us that the best way to get ahead was to settle down in a large company and stay for decades. Although that worked for some of my peers, it never resonated with me. I never waited for opportunities to come to me. I would rather be proactive and pursue new directions. For young people, today is no different. Always be thinking about what you need to continue your career development. If your current employer can give you that, great. If not, look elsewhere.

Thrown into the Deep End

The entire IT department at Smith Tool was about 30 people, with 15 of us in application development3. The company’s systems ran on two IBM mainframes. The manager of the applications group was Rodger Beard, and he assigned me to the supervisor of manufacturing systems, Ken Ruiz.

On my first day, Ken sat me down to give me a primer on manufacturing systems. I was totally clueless. Using a white board, he explained the concept of part masters, which defined inventory items—whether finished products, intermediate assemblies, or purchased parts. He also explained product structure records, which defined the relationships between part masters to form bills of material. Managing these relationships required a special type of IBM database, known as BOMP (Bill of Materials Processor) and the newer DBOMP (Database Organization and Maintenance Processor). He then went on to explain work centers and routings, which define manufacturing processes.
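For readers unfamiliar with these concepts, here is a minimal sketch, in modern Python rather than a BOMP database, of part masters, product structure records, and a bill-of-material "explosion." The part numbers and quantities are invented for illustration.

```python
# Part master records define inventory items (finished, intermediate, purchased)
part_masters = {
    "BIT-100": "Finished rock bit",
    "CONE-10": "Cone assembly",
    "BRG-5":   "Purchased bearing",
}

# Product structure records link a parent part to a component part,
# with a quantity-per-parent; together they form the bill of material.
product_structure = [
    ("BIT-100", "CONE-10", 3),
    ("CONE-10", "BRG-5", 2),
]

def explode(part, qty=1, level=0):
    """Recursively list (level, component, total qty) for one parent part."""
    requirements = []
    for parent, component, per in product_structure:
        if parent == part:
            requirements.append((level + 1, component, qty * per))
            requirements.extend(explode(component, qty * per, level + 1))
    return requirements
```

Exploding "BIT-100" yields its three cones and, beneath them, the six bearings those cones require; this is the kind of traversal BOMP and DBOMP were built to make efficient.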

Later that day we went out to lunch. I drove, with Ken and two other co-workers in the car chatting about something called “whip.” Finally, I asked, what is “whip?” Work-in-Process (WIP) was the answer. As I said, I was clueless4.

The next day, Ken gave me my first assignment—to customize and implement the MRP module of COPICS5, which was written in IBM assembly language. This was my first encounter with any type of packaged software, although it wasn’t much more than a collection of assembler source code known as an IBM Field Developed Program, which just meant that an IBM-er wrote it for some customer and it was then made available to others. The IBM engineer who wrote it just happened to be assigned to Smith Tool. His name was Roly White6.

My assembler skills were a bit rusty since I had only used them sporadically at Macy’s two years prior. But within a few weeks I had made the necessary modifications and even found a bug in Roly’s code.

I absolutely loved manufacturing systems. They were so much more interesting than accounting systems. I looked forward to going up into the plant mezzanine for meetings with users. I would don my Red Wing steel-toed shoes, safety glasses, and ear plugs, and take my time getting to and from the meeting so I could stand and watch row after row of multi-axis milling machines or modern CNC machines throwing metal chips on the floor7.

Not Smith Tool, but similar, and much smaller. 
My crash course in manufacturing was not limited to on-the-job training. Within a few weeks, Ken mentioned the monthly dinner meetings of Orange County APICS, the American Production and Inventory Control Society (recently renamed as the Association for Supply Chain Management, ASCM). About half of our department would attend each month, and the dinner meetings drew several hundred people. The legendary George Plossl spoke at one meeting and visited us at Smith Tool the next day, where he fielded questions. (I remember one on how to plan capacity for a heat treat operation.) I also enrolled in a four-course series of APICS certification classes at night at Cal State Fullerton, where we learned principles of inventory management, MRP, master scheduling, statistical forecasting, and other basic concepts. My favorite class was the final one, taught by Nick Testa, which tied everything together. Nick went on to become President of APICS International in 2006 and the chair of the APICS international conference8.


Within two years, I was APICS-certified at the Fellow Level, a real accomplishment for someone who only two years prior didn’t know what MRP or WIP stood for.

Lesson Learned: Build Your Industry Experience. If you are going to be an expert in business applications, you need to build your industry credentials. For IT infrastructure, this is not as critical a requirement. But when it comes to applications, most employers favor those with industry specialization. This is even more critical for consultants. If you have experience in ERP systems for manufacturing companies, for example, that doesn’t translate to ERP in charitable organizations or hospitals. In the course of my career, I gained experience and credentials in several manufacturing sub-sectors, such as medical devices, pharmaceuticals, food manufacturing, and high-tech, among others. This doesn’t mean you can’t specialize in multiple industries, but don’t try to be a jack-of-all-trades.

Moving to the Front End of the Software Development Life-Cycle

SDM-70 Methodology Schematics
SDM/70
As noted in my earlier post, I was formally trained in software engineering during my time at TRW, mostly around system design, development, and testing. Soon after I arrived at Smith, the department standardized on an SDLC methodology called SDM/70. It consisted of about four feet of three-ring binders, with forms and instructions for each phase and step of the development process, from initial feasibility studies all the way to go-live and ongoing maintenance. Rodger made me the department coordinator.

I understood the system design and development phases, but what really interested me was the earlier stages, like system requirements and especially business requirements. I wanted to get involved earlier and earlier in new projects, even to the point of helping decide whether a new system was even feasible, or whether there was a business case for it.

Over the next five years, I led a number of interesting projects, with several important lessons learned—both positive and negative. But the most interesting project of all was a Bit Record Database system, where I led the development of a new system to track the current and historic downhole performance of drill bits in the field. This system would become a key focus of my career direction over the next eight years. And, as I recently discovered, it was a strategic initiative for the company at the highest levels.

Lesson Learned: Pick a Focus. When it comes to enterprise IT, figure out where your interests really lie. No one can specialize in every aspect. Some of my coworkers went deep into coding, others enjoyed project management, others pursued a management path, while others, like me, wanted to get close to the business. This was another indication of where my career would be headed in the coming years, which included my leaving Smith’s IT department altogether and moving into the business itself. More on that in a future post.

How I Almost Missed This Career Opportunity

My years at Smith Tool were my greatest period of professional development at this point in my career. But I came very close to missing out. Here’s the story. My first interview was with a low-level HR representative; I had arranged for her to screen me during my lunch hour. I was very eager to get past her and on to the hiring manager. Unfortunately, she kept me waiting for nearly an hour. When I finally got in to see her, my annoyance was visible, and she decided not to pass me on for the next interview. After two or three weeks, my recruiter went back and somehow convinced the HR group to interview me again. This time, I behaved myself and got passed on to Rodger. I got the job. But, without this second chance, my career could have been much different.

Lesson Learned: Respect People at Every Level. Everyone deserves tolerance and respect, from the front desk receptionist, to the warehouse worker, to the CEO. Moreover, you never know who has influence, and who can make or break your career. And, as my wife, Dorothy, points out, this lesson applies in all areas of life, not just the workplace.

But now I have discovered there is another angle to this story. After reviewing the first draft of this post, Rodger gave me some new perspective from the other side of the desk, so to speak. He writes that his memory around how I was hired (or nearly not hired) is somewhat different than mine. He writes:

As was often the case, here HR used a simple checklist of buzzwords they didn’t understand to screen candidates, rather than understanding the basic job requirements….HR didn’t want to and/or just didn’t know how to screen for (1) brains, (2) energy level with track record, and (3) integrity—what I was specifically demanding. HR would provide me two stacks of resumes each week. Your resume was in the “wrong” stack because of the buzzword score. But HR had not provided enough candidates, so we decided to bring you back in anyway, regardless of the scoring. You then answered my questions and clearly demonstrated you met my three fundamental requirements. I was pretty sure that we could provide the on-the-job training needed to close the application experience gap. That said, in all candor, I never expected that you would take the initiative that you did to close the gap to the extent that you did, as rapidly as you did.
Lesson Learned: When Hiring for Key Positions, Understand What Really Matters. A job posting today can bring in hundreds if not thousands of job applications. Modern recruiting systems, in response, can sort through them and prioritize them into electronic piles, like Smith’s HR group did with paper 40 years ago. Although we need some way to prioritize applicants, the process has to start by thinking through what really matters in qualifying a candidate—not just picking buzzwords. Fortunately, for me, I had Rodger sitting on the other side of the desk.

A Bump in the Road

My employment at Smith continued until 1983, when it took a short detour due to the oil crisis and a massive layoff. I had been promoted into first-level IT management by that time. Although I was not part of the layoff, the head count reductions meant I needed to be demoted back to a systems analyst and project manager role. Even worse, that Bit Record Database project I was leading was put on hold due to the downturn. This was too much for me, so I reluctantly took a job offer from Gemco, a now-defunct membership department store, to go back to my retail industry roots. But I was only there for a few months when I got the call from Smith to come back to restart the Bit Record project. I agreed, but only if I could do so as an independent consultant. This launched my consulting career. More on this in a future post.

Update: For the next lesson learned from my Smith Tool days, see: What I Learned from a Waterfall Project Failure.  

Footnotes

1Smith Tool had an interesting history. The firm was founded in Southern California in 1902, a time when California rivaled Texas and Pennsylvania in the oil industry. (You can still see oil field pumps in parts of Orange and Los Angeles counties.) Smith started with fish tail bits but eventually began manufacturing three-cone rock bits, which had been invented by Howard Hughes, who founded Hughes Tool. In 1975, Smith tried to get two Hughes patents invalidated but lost the case in 1986 and was ordered to pay over $200M for patent infringement. Using my bit record database, I worked behind the scenes with Smith attorneys during that litigation, and my analysis was submitted in court. I may share more about this in a future post. Smith Tool, along with the rest of Smith International, was acquired by Schlumberger in 2010, and is now known as Smith Bits.

2In addition to the career opportunities, Smith Tool was only about five miles from our home in Irvine. With new baby Steve on the way and only one car, I bought a 10-speed and was able to bike to work several days a week, which did wonders for my cardio-fitness.

3When I arrived, the IT environment at Smith Tool was two IBM 3033 mainframes running IBM’s Disk Operating System (DOS). Smith was reportedly IBM’s largest customer running this lower-end mainframe OS. The only commercial software in the portfolio was MSA for payroll and HR. Everything else was custom batch systems, which had been developed by Arthur Andersen (now Accenture) several years earlier. Most of them had a similar design. I also began developing online (CICS) applications around this time.

4In writing this post, it was hard for me to understand what Rodger saw in me. It must have been that the demand for manufacturing systems developers far exceeded the supply. They had no choice but to train them. I think it must have also been my academic credentials, along with my experience with both COBOL and assembly language (which would soon be needed). They were also interested in my experience at Macy’s with IBM’s DOS operating system and my experience at TRW with IBM’s high-end MVS operating system. In reviewing an early version of this post, Rodger confirms that all of these factors came into play and that my DOS/MVS experience was particularly attractive. Eventually, Rodger assigned me as the co-project manager for the company’s DOS to MVS migration. My co-PM was a young systems programmer named Wayne Meriwether. Since then, Wayne and I have had many business relationships: He has been my client, we have been business partners at Strativa, and after he left Strativa, he came back as a subcontractor. He remains a friend and close associate.

5I am having difficulty tracking the history of COPICS. Some sources indicate the product was announced by IBM in 1979. But I was working on the MRP module in 1978, near the beginning of the “MRP Revolution.” So, it might be that this field-developed program, written or customized by Roly, was incorporated into COPICS. If so, my bug fix would have become part of the commercial product.

6Even as a young man, Roly White was a remarkable character. When he would show up, we all stopped work to hear what he had to share. He was a short, wiry guy, with the standard-issue IBM white dress shirt—but always with an unbuttoned top button, his tie loose, and a cigarette dangling from his fingers. He became a pillar of the Orange County APICS community, where he served for many years. He passed away in 2016, and there is a scholarship fund in his name.

7The larger milling machines could produce much greater volumes of parts, but they took several days to set up for each job. If you understand the relationship between setup time and lot size, you know what this means for inventory levels. The CNC machines could combine multiple operations in one quick setup, but they only produced one part at a time. Understanding these relationships led in a few years to the lean manufacturing movement, which followed the MRP revolution. Contrary to the belief of some, the two are not conflicting theories.
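The setup-time/lot-size relationship mentioned above is the classic economic order quantity (EOQ) tradeoff: the economic lot size grows with the square root of setup cost. The numbers below are invented for illustration, not Smith's actual figures.

```python
# EOQ: economic lot size = sqrt(2 * D * S / H), where D is annual demand,
# S is the cost of one setup, and H is the annual holding cost per unit.
from math import sqrt

def eoq(annual_demand, setup_cost, holding_cost_per_unit):
    """Economic lot size that balances setup costs against inventory costs."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

# A multi-day setup on a big milling machine forces large lots...
big_mill_lot = eoq(annual_demand=10_000, setup_cost=4_000, holding_cost_per_unit=10)
# ...while a quick CNC setup at 1/100th the cost cuts the economic lot
# size tenfold, which is exactly the lever the lean movement pulled.
cnc_lot = eoq(annual_demand=10_000, setup_cost=40, holding_cost_per_unit=10)
```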

8Over the years, Nick Testa and I have had many business relationships. I have been his student, and after I got certified we went on to be co-teachers and co-developers of APICS curricula. Later we became co-workers running two system integration practices, and later he became a subcontractor to me at Strativa. He remains a friend to this day.
 
Photo Credits

1. Creator: Robert Yarnall Richie. Credit: DeGolyer Library. Via PICRYL. Note: the rock bit here is not a Smith Tool bit but that of a competitor, Reed. They are nearly indistinguishable.  This one is a milled tooth bit, in contrast to the higher-end tungsten-carbide insert bits. 

2. CNC Machine Facility, Wikimedia.

3. SDM2 Method, which is similar to that of SDM/70. Wikipedia.


Thursday, July 29, 2021

Amazon and Workday Part Ways on HCM

I missed the news earlier this week that Amazon and Workday called off the implementation of Workday HCM. Apparently this is only coming to light now, even though the project was abandoned more than 18 months ago. How something this big was not leaked earlier is a mystery. 

Phil Wainewright has a thoughtful post on the subject. He writes: 

Questions remain concerning e-commerce giant Amazon's discontinuing of its wholesale deployment of Workday HCM and Payroll, which came to light this week after a report in Business Insider. Workday subsequently published a blog post confirming that the two companies had "mutually agreed to discontinue" the deployment more than a year and a half ago, which was over three years after Amazon first signed up to the deal in October 2016. The deal was announced in February 2017, shortly after Workday announced retail giant Walmart as a customer, a deployment that has since successfully gone live.

On a positive note, the project is ending without litigation. And, according to Workday's blog post, it will continue its partnership with Amazon's AWS for its cloud infrastructure, as well as its implementations with other Amazon subsidiaries, such as Audible, Twitch, and Whole Foods. 

What Happened?  

The Business Insider report, based on an anonymous source, says "the database behind Workday's software didn't scale as planned to fully support Amazon's rapidly growing workforce."

Workday disputes this.  It writes: 

This was not related to the scalability of the Workday system, as we currently support some of the world’s largest organizations, including more than 45% of the Fortune 500 and more than 70% of the top 50 Fortune 500 companies. In addition, more than 70% of our customers are live, including one of our largest customers — a retailer — across its more than 1.5 million global workers.

Workday, rather, attributes the project failure to Amazon having a unique set of needs. It writes, "At times...customers have a unique set of needs that are different from what we’re delivering for our broader customer base, as was the case with Amazon — one of the most unique and dynamic companies in the world."

How Do I See It? 

All I can do here is read between the lines. 

First, I don't think the Business Insider's claim of a Workday scalability problem is credible. Workday doesn't name its large retail customer, but no doubt it is Walmart. In 2020, Amazon had about 1.3 million employees, while Walmart had about 2.3 million. So, as my late business partner used to say, there is "an existence proof" for Workday being able to scale to support enterprises with multi-million employee counts. 

Then, what about Workday's claim that Amazon had some unique requirements that are different from what Workday provides for the rest of its customers? 

This has a ring of truth to it. Amazon is unique in many ways, and it would not be surprising if this extends to how it hires, retains, and manages its workforce. As a SaaS provider, Workday cannot afford to customize its core architecture and processes to accommodate a single customer, even one as large as Amazon. It is commendable that Workday was willing to walk away from a large opportunity like this rather than compromise its core architecture.

On a much smaller scale, Plex (a cloud manufacturing ERP provider), in its early years, used to make customer-specific customizations to its core multi-tenant code base. Later, it paid the price to move those customers off those customizations and back to its common core. To my knowledge, it is still trying to do so. Workday is not about to make that mistake. (Interestingly, Plex itself is a Workday client and partner.)

What Happens Next? 

Workday writes that it and Amazon may revisit the HCM deployment in the future. But for now the project has been discontinued. This leaves Amazon on its legacy Oracle PeopleSoft HCM system.  

This is where the plot thickens. There is no love lost between Amazon and Oracle. With its Redshift offering, Amazon looks to shift Oracle customers away to Amazon's data warehouse. Oracle, in turn, looks to compete with Amazon with its own cloud infrastructure offering. Naturally, Amazon has been working to disentangle itself from any use of Oracle products in its internal operations. Having to remain on PeopleSoft has to stick in the craw of Jeff Bezos. This might explain why the project may be revisited in the future. 

There are not many HCM options for enterprises the size of Amazon. PeopleSoft is a legacy platform, with Oracle’s HCM Cloud as its successor. But Amazon is not likely to increase its dependence on Oracle. Workday is the obvious alternative, which is why, despite the project failure, it still might be "revisited."

So, is SAP an option?

Sunday, June 27, 2021

What I Learned at TRW Credit Data about Enterprise IT

TRW Credit Data disk drive farm
This post continues my look back at lessons learned from my career in enterprise IT. My first such job, at Macy’s headquarters in New York City, came to an end in late 1976 when Dorothy and I, now with an infant daughter, Susanna, moved to Southern California. 

In that first post I shared how, starting as a trainee, I eventually took over support for Macy’s daily credit card billing cycle. I was also responsible for the export of customer information for monthly submission to TRW Credit Data. Now known as Experian, it was and still is one of the three major consumer credit reporting services in the U.S. When I announced my decision to relocate to Southern California, my primary user manager, the Accounts Receivable director, graciously offered to pass my name on to his sales rep from TRW Credit Data, which just so happened to be based in Orange County.

That referral turned out to be critically important. When I arrived in SoCal, I started responding to classified ads and had three job offers within two weeks.1 But the offer from TRW was the best. Some months later, my new manager, Rick Range, told me, “I don’t know what kind of pull you had back at Macy’s, but our sales rep was pretty insistent that I had to interview you.” 

Lesson Learned: Value every business relationship. You never know which ones will be instrumental in the future. And, be sure to “pay it forward” whenever you can. Today, LinkedIn and other social networking sites make it easy to stay connected, but too often we still do not maintain those relationships. 
 

One of the World’s Largest Data Centers

At the time I was hired, the offices and data center of TRW Credit Data (officially, Information Systems & Services, or IS&S) were in unmarked buildings on Katella Avenue in Anaheim.2 The data center then was one of the largest in the world in terms of storage. Maintaining credit history on over 200 million Americans took a lot of disk drives in those days. The photo at the top of this post (source) shows the disk drive farm from the early 1980s, a good five to eight years after I worked there, but it gives a sense of how enormous it was. At the time I was there, there was one IBM mainframe and one Amdahl plug-compatible machine, to maintain some balance of power with IBM. 

It was such an impressive site that it served as a location for the 1985 film Prime Risk. The screen shot below is from that film. 
 
 

Dumpster Diving

On my first day, Rick gave me a facility tour, including a walk through that data center. But there was an unusual stop on the tour. Rick took me out behind the data center to the loading dock, where there were several large dumpsters filled with green bar computer printouts to be scrapped. They all had locks on them. He explained that, in the past, criminals had searched through those dumpsters and had obtained access credentials into TRW’s consumer credit database, which they used to steal credit card numbers. This led TRW to secure those dumpsters. 

Lesson Learned: Everyone in the enterprise must be security-minded. Many people think IT security and identity theft only became a problem with the dawn of the Internet. Not true: It was a problem back in the 1970s, and TRW took active countermeasures against it. Security not only included technical measures, but also physical security, such as securing dumpsters. Moreover, it was not just outsiders who were a threat. The company was highly sensitive to insider threats—employees inappropriately accessing credit reports, either out of curiosity or for more nefarious reasons. The system monitored all access and audited which employees accessed which credit reports. If you accessed a credit report for no legitimate reason you could be fired on the spot. 
 

A Theoretical Foundation in Software Development

Because of my experience with accounts receivable, Rick assigned me to the subscriber billing system, which was written in COBOL. I also developed new systems for sales reporting and a new system to handle billing for a new credit reporting system acquired by TRW. As I’ll note shortly, I was only at TRW for about 18 months, but I wrote a lot of software.3 

It was also a great place to develop my programming and software design skills, with courses taught in-house. I also enrolled in several graduate courses at the University of California Irvine (UCI), which was and still is one of the top schools in the country for computer science. It was also convenient, being only about a mile from our home. 

My favorite course was Introduction to Software Engineering, taught by Peter Freeman, who later went to Georgia Tech to become the Founding Dean of the College of Computing. Despite the name, the course was anything but elementary. We covered the work of software luminaries such as Ed Yourdon, Daniel Teichroew, Edsger Dijkstra, Barbara Liskov, Michael Jackson, Grady Booch, Fred Brooks, and Glenford Myers. Professor Barry Boehm came down from Los Angeles to give one guest lecture—coincidentally he also worked for TRW, in its Defense Systems Group. 

The course ended for me on a high note. The final class exercise was to take one of the methodologies we had learned and apply it to a theoretical project. But I had a real project assignment from work—to develop a system to maintain accounting tables for the billing system—and I decided to apply three of the techniques (i.e., HIPO, Structured Design, and Jackson Design) to develop the system. I scored an A+ on the project, and Dr. Freeman wrote to me, in part: 
An outstanding project! This project shows much thought and care in its preparation. This perhaps is because you intend to implement the system. 
I was most interested in your use of different design techniques depending on what stage of design development you were at. This is quite an innovation. Also, as your critique reveals, this choice gave you the opportunity to evaluate the utility of each of the design techniques used. Impressive. 

Keep up the good work (and ask for a raise). 

After this class, I was so thirsty for knowledge that I applied and was accepted into UCI’s Master’s program in computer science. But with a full-time job and one-year-old Susanna at home, it was more than I could handle, and I never enrolled. Still, I continued to take other classes over the years.  

Lesson Learned: Never stop learning. Many of the principles underlying today’s best practices in software development have their roots in principles we learned in those years. For example, object orientation is consistent with Structured Design principles of cohesion and coupling. Although the Agile Manifesto was decades in the future, it had a precursor in Barry Boehm’s promotion of spiral development. And, as I continued my studies, I found myself more and more interested in moving my focus earlier and earlier in the software development process, from coding to system design, to functional design, and to business requirements. This would be a sign of where my career would be heading.
 

Large Scale Project Management

TRW tape library
There is another interesting chapter in the history of TRW Credit Data. As noted earlier, its consumer credit database was one of the largest in the world. In fact, it was so large that we actually needed three databases to cover the U.S.—with an East Coast, Midwest, and West Coast database. 

But now, imagine what needed to happen when a consumer moved from one region to another. The system had to move that consumer’s records from one database to another. For this reason, and others, there was a process at the time called “Reorg” that had to run, I believe, weekly. In fact, it took the whole week—when the data center finished one reorg, it started the next one. It was a burdensome tape-oriented process. The 1985 film Prime Risk, mentioned earlier, has one scene, shown in the screen shot nearby, with row after row after row of the enormous tape library. 

It soon became evident that the entire consumer credit database would need to be rewritten. The project had just been launched when I started with TRW in 1976, under the unassuming name “Project Rewrite.” Shortly thereafter, management gave it a new name, “Project 78.” The reader may guess what the 78 stood for. In any event, the project was suspended until it was restarted in 1994 as Project Copernicus and completed in 1996 under the project name File One, with a $110M budget and a 400+ person project team. As I understand it, the big reason the project was finally completed was that it was a contingency for the leveraged buyout that allowed TRW to spin off the business as Experian.4 

Lesson Learned: Manage the schedule. The number one reason projects exceed their budgets is that they do not meet their schedules. Conversely, spending what it takes to push a project over the finish line can be a better choice than letting a project run unconstrained in its schedule and not delivering on its objectives. It also helps when there is a critical business need with no other option, as was the case here. A decades-long project schedule slip is unusual, of course, but the principle applies to more typical projects. 
 

Time to Branch Out from Accounting Systems

As noted earlier, I only stayed at TRW for about 18 months. I was still young in my career, and I was itching for a place where I could continue to develop new skills. Accounting systems were getting boring to me, and I read in Computerworld that there were opportunities in manufacturing. So, I started another job hunt. But that will need to wait for the next chapter. 


Footnotes

1Cross country job hunting in 1976 was nothing like it is today. There was no internet, no online job boards, email, or video conferencing. When Dorothy and I decided to relocate, my only information regarding job opportunities was the classified ads in the Los Angeles Times, which I managed to pick up at a NYC newsstand. That at least gave me an idea of what kind of jobs were out there and where they were located. But no employer in SoCal was going to consider, sight unseen, a programmer/analyst candidate with less than three years’ experience from across the country. So, the job hunt really couldn’t begin until I physically relocated. 

2The office building had originally been some sort of motel, and there was an open courtyard in the middle with palm trees. Muzak was piped throughout the facility. It was quite a pleasant environment, far from the hustle and bustle of Manhattan. The office building and data center were unmarked, and for good reason. Consumers who had been denied credit based on TRW reports had been known to enter TRW offices and threaten employees. I shared an office, which had originally been a motel room, with another co-worker, Ken Romans. He was working on the core credit data system and had (and still has) much deeper technical skills than I did. Within the first year, TRW moved the offices from the converted motel to a new midrise office building in The City in Orange. But the data center continued on Katella until 1992, when TRW moved it to Texas.

3In an interesting twist, a family friend worked at TRW decades later. She told me that, during their work remediating the Y2K problem, she heard developers refer to a program as “so old, it must be one of those written by Frank Scavo.” I wonder if any of those programs are still in production. 

4There is another twist to this story. The program director who eventually took File One over the finish line was Don Miller. Don was a close friend and associate of Dan Husiak, who came to work for me in 1998 and in 2000 became my business partner and co-founder of Strativa, our management consulting firm. Don became one of our trusted advisors. As a result, other ex-Experian executives came into my network, including Michael Scharf, Don Lavoie, and Will Sproule.

Sunday, June 13, 2021

A Personal Experience of Lateral Thinking

Pluto, dog
I missed the news earlier this week of the death of Ed de Bono. De Bono was the father of lateral thinking, a framework for creative thinking. Wikipedia has a decent outline of his ideas and influence. 

De Bono was one of the authors that most influenced my consulting career. I adopted many of his tools, most of them quite simple, in analyzing business problems and coming up with creative solutions. The tools are especially useful in facilitating groups. I only regret that I have not applied his methods more consistently in client engagements and even in my personal life. 

One of my favorite tools is random word stimulation, which is great for brainstorming sessions. Simply put, when you are trying to come up with ideas that are "outside the box," you generate a random word (e.g., pick it out of a dictionary) and use that word as a jumping-off point to come up with new ideas. The key, as always with brainstorming, is not to look for ideas that make sense but simply to generate as many as you can. When you run out of ideas, do another random word, and another. As with any kind of brainstorming, no judgment of the ideas is allowed. Save that for a later step. 

In explaining this method, I like to point to one experience from a consulting engagement many years ago. My late business partner, Dan Husiak, was leading a client engagement to develop a new business strategy for a division of what was, at the time, one of the largest health plan providers in the U.S.  We had scheduled a brainstorming session the next day, and Dan assigned me to facilitate the session. The objective was to come up with some new out-of-the-box ideas for new products or lines of business. 

I suggested we use random word stimulation and explained how to do it.  Dan didn't like it. 

Dan: You mean you just pull out some random word, like "Pluto?" 

Me: Okay, good example. Do you mean Pluto the planet, or Pluto the cartoon character? 

Dan:  I don't know, just "Pluto." 

Me: Okay, let's think about this. Pluto the planet. It's small, it's far out. We need far-out ideas. Not much there. Now, Pluto the dog. He's a dog. He's an animal. He's a pet.  Hey, we can offer health insurance for pets! 

That was enough for Dan to give me permission to try it the next day. 

The brainstorming session was a success. In fact, at the beginning of the session, I used the "Pluto" story as an example of how to use random word stimulation. After a couple of hours, the client project team had come up with a number of promising new ideas--enough for us to start evaluating them the next day.  

The best part of this story is that at the end of the session, the top executive for the firm's Medicare HMO product line came up to me and said, "I want to talk to you about that pet insurance idea. You know our seniors love their pets, and pet ownership correlates with positive outcomes. We should look into how to offer a pet HMO."  

Keep in mind, this was before health plans for pets were a widespread practice. 

Thinking tools like random word stimulation are not only effective in creative thinking and problem solving. They are also fun. 
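For readers who want to try the technique, the mechanics are simple enough to sketch in a few lines of Python. This is purely illustrative; the word list and the sample prompt are my own inventions, and the real work of free-associating ideas is, of course, done by the people in the room, not the program.

```python
import random

# A tiny stand-in word pool; in practice you would pick from a dictionary.
WORDS = ["pluto", "anchor", "ladder", "honey", "compass", "lantern"]

def random_word(rng=random):
    """Pick a stimulus word at random, as in de Bono's technique."""
    return rng.choice(WORDS)

def brainstorm(prompt, rounds=3, rng=random):
    """Pair the problem statement with several random stimulus words.

    Returns a list of (word, prompt) pairs for a facilitator to present
    to the group, one at a time, until the ideas stop flowing.
    """
    return [(random_word(rng), prompt) for _ in range(rounds)]

for word, prompt in brainstorm("new products for a health plan"):
    print(f"Stimulus: {word!r} -> free-associate ideas for: {prompt}")
```

In a real session you would give each stimulus word a few minutes before drawing the next, and defer all judgment of the ideas to a later step, just as described above.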

Ed de Bono will be greatly missed. But fortunately he left the world with a long list of books, courses, and other publications for learning how to think.  A good place to review them is his website. 

Thursday, March 18, 2021

Enterprise Buyers Not Looking for a One-Stop Shop

There's been an interesting discussion on Twitter over the past few days, which I started with this deliberately ambiguous tweet. 

IMO, very few enterprise buyers are really looking for a "one-stop shop." 

As intended, that brought out replies from several friends and associates, such as Vijay Vijayasankar, Oliver Marks, Holger Mueller, Jody Lemoine, Shane Bryan, John Appleby, and others. 

So, what did I learn from the dialog? 

First, I was thinking back to client meetings I've sat through over the decades, where business leaders positioned "one-stop shop" as a key element of their desired strategy. 

In other words, in the market we serve, customers are typically looking for 10 things.  But today, we only offer seven. If we can offer all 10 things, we can become a one-stop shop! Customers will not have to go anywhere else but will have the convenience of having us satisfy all their needs. 

In enterprise software, this might translate to an ERP system vendor attempting to offer a CRM system or supply chain management suite, or product data management, or a host of other complementary products. Invariably, because these systems take years to develop from scratch, in practice this means acquiring those complementary products. It may also mean offering other elements of a complete solution, such as a development platform, tooling, system integration services, even databases or hardware. 

I don't know if Oracle ever used the term "one-stop shop," but it has certainly behaved like one. It has been on a multi-decade acquisition spree, not only in business applications, but also in databases (its roots), infrastructure software (BEA), even hardware (Sun). To be fair, it also plowed profits from those products into new development, such as for its Fusion cloud applications. And it is now competing with Amazon for cloud infrastructure services. It is a poster child for the one-stop shop. 

SAP has had its own version of the one-stop shop, acquiring a variety of systems (Holger calls some of them the seven sisters). It also built its own proprietary database, and it also has its own development tooling. 

What About One Throat to Choke? 

One can imagine why such a strategy might be attractive to technology sellers.  But is it attractive to technology buyers? 

I say, no. In decades of consulting, I don't think I've ever heard a client say, "I just wish I could buy everything I need from a single vendor. What I need is a one-stop shop." 

But isn't a one-stop shop the same as "one throat to choke?" I say no. One throat to choke means that in a system implementation, for example, there is a prime contractor or service provider ultimately responsible for delivery. If another partner in the deal is not meeting its commitments, the prime contractor or service provider serving as overall program manager is responsible.  It doesn't mean that there is only one service provider or vendor in the deal. 

What About Integrated Suites? 

Holger asked, "Are you saying that [integrated] suites are done?" Not at all. But I have three responses to this. First, many integrated suites are anything but. Especially if, as noted above, the vendor built its suite from piece parts that it acquired over time. It takes years to integrate software acquired from various sources. So, buying from a vendor attempting to be a one-stop shop does not ensure you are really getting an integrated suite. 

Second, I have seen very few large deals where there was only a single software provider in the deal. There are almost always complementary products whether they be for sales tax reporting, factory data collection, data analytics, or countless other niche requirements. 

Third, no IT organization's application portfolio only has software from a single vendor, not even a handful of vendors. Even small companies buy software from dozens of vendors. There is no one-stop shop in enterprise software. 

What About Application Rationalization? 

But what about vendor consolidation? Maybe one vendor isn't reasonable, but isn't it a good idea to limit the number of software providers and rationalize the applications portfolio?  Certainly, many companies need to consolidate applications, especially if they grew through mergers and acquisitions and now have two, three, or more ERP systems, for example. 

But that does not mean they need to only buy from one vendor. 

Vendors love to talk about vendor consolidation, as long as the surviving vendor is them. They call this gaining in their "share of wallet," as in the buyer's wallet. 

In my view, when it comes to vendor consolidation you can have too many vendors and you can also have too few. You don't want to have so many vendors that you have redundant types of systems. On the other hand, you don't want to have too few vendors to the point that they gain leverage over you.  

To this point, I've heard of customers engaging in multi-year programs specifically to reduce dependence on certain Tier I vendors, as they become too powerful and attempt to engage in wallet fracking, as my friend Brian Sommer calls it. 

Is there a way to have the benefits of integration and applications rationalization without becoming overly reliant on a single vendor?  I think there is.  Modern cloud systems have become API-oriented. And to be fair, the major vendors, even those aspiring to a greater share of wallet, are building with this model. They have to, if they want market acceptance. Cloud leaders, such as Salesforce, do it by providing a platform that partners can write to, even leveraging Salesforce objects, to provide that integration. Oracle's NetSuite offers a similar capability. Cloud ERP vendors, like Acumatica, Plex, and Sage Intacct are very integration-friendly. Oracle's cloud applications and SAP's offer open APIs, as does Workday. Microsoft has similar capabilities. 
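To make the API-oriented integration model concrete, here is a minimal sketch of what a point-to-point sync between two cloud systems looks like. The endpoints and field names are entirely hypothetical, not any vendor's actual API; real integrations would add authentication, retries, and field mapping, or run through an integration platform rather than hand-rolled code.

```python
import json
import urllib.request

# Hypothetical endpoints -- stand-ins for whatever REST APIs the vendors publish.
CRM_URL = "https://crm.example.com/api/v1/customers"
ERP_URL = "https://erp.example.com/api/v1/customers"

def fetch_customer(customer_id, opener=urllib.request.urlopen):
    """GET one customer record from the (hypothetical) CRM API."""
    with opener(f"{CRM_URL}/{customer_id}") as resp:
        return json.load(resp)

def push_customer(record, opener=urllib.request.urlopen):
    """POST the record to the (hypothetical) ERP API; return the HTTP status."""
    req = urllib.request.Request(
        ERP_URL,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with opener(req) as resp:
        return resp.status

def sync_customer(customer_id):
    """Move one record CRM -> ERP over their published APIs."""
    return push_customer(fetch_customer(customer_id))
```

The point is that when both systems expose open APIs, the buyer, not the vendor, decides which products sit on each side of that call.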

If this is the future, then maybe vendors will give up the strategy of the one-stop shop. 

Wednesday, March 10, 2021

Deploying Low-Latency Applications in the Cloud

Cloud has become the preferred deployment option for most categories of enterprise systems. But conventional wisdom is that some systems that require low latency and a high degree of system availability, such as warehouse management systems (WMS) or manufacturing execution systems (MES), are best deployed on-premises.

The argument is that response time over the Internet is never as fast as over a local area network and that such systems cannot tolerate the unscheduled downtime inherent in using a cloud application. 
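The latency half of that argument is easy to quantify. Here is a rough sketch that estimates round-trip time by timing TCP connections; the approach and the thresholds implied are mine, not from any vendor. A host on the local network typically answers in well under a millisecond, while a cloud region can be tens of milliseconds away, which is the gap a cloud WMS or MES must engineer around.

```python
import socket
import statistics
import time

def rtt_ms(host, port=443, samples=5, timeout=2.0):
    """Estimate round-trip latency to host:port in milliseconds.

    Times several TCP connection setups and returns the median, a crude
    but serviceable proxy for network round-trip time.
    """
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; we only care how long that took
        times.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(times)
```

Comparing the figure for an on-premises server against one for a cloud endpoint makes the LAN-versus-Internet trade-off visible in a single number.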

Nevertheless, cloud systems are invading even this category of software. Although still in the minority, there are some vendors providing such low-latency applications as a cloud service. 

One example is Plex Systems.  And, interestingly, it is not a new example. 

Read the rest of this post on the Avasant website: Deploying Low-Latency Applications in the Cloud.