Tuesday, August 29, 2023

Opportunities and Challenges with Generative AI

Although artificial intelligence originated in academic research in the 1950s, only recently has it captured the imagination of the general public. This has everything to do with the release of ChatGPT, which put a powerful generative AI tool in the hands of individual consumers. But what opportunities does it bring to businesses? And what challenges do we face in using it?

I blogged about this back in February, not long after ChatGPT was released, in my post, ChatGPT for Industry Research: Not Ready for Prime Time. This was based on my early testing of the technology. Since that time, use cases by industry have started to surface, and there are many promising opportunities, just a few of which we discuss in my interview. But the risks and concerns still remain. How can we realize the opportunities, while minimizing the risks?

Read a summary of the interview on the Avasant website, which includes a link to the full video.

Frank Scavo video interview on generative AI

Monday, August 21, 2023

A Teams Model for Effective Innovation

This post continues my series on lessons learned in my career, the ideas that influenced me, and the people who helped me along the way. This post is on the role of teams in developing and implementing innovation.

Most of my career as a consultant over the past 40+ years has involved innovation in one way or another. That’s what originally drew me into consulting. But innovation is rarely the domain of individual contributors. Innovation is a team sport. The most interesting and exciting times in my career have been when I could participate in a team focused on some sort of innovation. These experiences included developing new systems, building several consulting practices, developing new research publications, or participating as a consultant on a client’s team. 

So, it is critical to understand how team members can work together most effectively to bring an idea to reality. And this includes understanding the roles that each innovation team requires and the stages that the team passes through. 

Two Conceptual Models for Working Together

For most of the 1990s, I was a consultant for a systems integration firm in Orange County, California (no longer in business). During that time, I first managed two groups of ERP implementation consultants and then launched a management consulting practice within the firm. I also developed most of the firm’s internal training and consulting methodologies. Because the owners of the firm knew how important teaming was to our success, they brought in an outside consultant, Dr. Karol A. Bailey, to train us in two behavioral profiling tools. 

  • The first was DiSC, an assessment tool to help individuals better understand themselves and others, along with their preferred work styles. Originating in research from the 1920s, DiSC has gone through multiple iterations and refinements over the years and is still in widespread use today. It is now owned by John Wiley & Sons and is available through its authorized resellers. It is a powerful tool, and I still apply it today in my personal interactions and collaboration with others. My long-time associate Dee Long became a certified DiSC trainer and has been a great help to me in continuing to apply it over the past three decades. 
  • The second was what we knew at the time as the C.A.R.E. profile [1]. Although this model is synergistic with DiSC, it was developed independently. It focuses specifically on the roles that any team needs and the stages an innovation must pass through on its way to successful implementation. 

The C.A.R.E. model is illustrated in the schematic below, which I’ve drawn from memory and earlier training material. It recognizes that a successful team moves an idea through four distinct phases, in sequence, forming a Z-pattern.  

CARE model of teams

  1. Creators. These are the idea people, who dream of new possibilities. They often start sentences with, “Wouldn’t it be great if ___________”. 
  2. Advancers. These are those who take the idea and run with it, communicating and promoting it inside and outside the team. Through interactions with others, they test the idea to see if there is—or could be—a market for it. 
  3. Refiners. These are those who analyze the idea to find issues or problems that stand in the way of success and develop solutions to resolve them. 
  4. Executors. These are the team members who oversee the implementation and, if appropriate, support it on an ongoing basis. 

The two roles on the top—Creator and Advancer—are focused on possibilities, what could be. They have their heads in the clouds. The roles on the bottom—Refiner and Executor—are focused on realities, what is practical. They have their feet firmly planted on the ground. The two roles on the left—Creator and Refiner—are focused on analysis. They like to work with abstract ideas. The two roles on the right—Advancer and Executor—are focused on relationships. They like to work with people.

The C.A.R.E. model also recognizes a fifth profile, the Flexer. This is the least common profile. These are individuals who by nature can serve in any of the other four roles. They are like utility players in baseball, able to play any position. They are also good at facilitating the process of moving the innovation from one stage to the next in the Z-process. You don’t need a Flexer on your team, but having one can be quite valuable. 

Moving Through the Four Stages

It is important to realize that, to ensure success, any innovation must pass through all four stages; skipping over a stage will lead to failure. For example: 

  1. Jumping straight from creation to execution. Some creators are so excited about the idea that they want to implement it immediately. “Let’s just do it!” they exclaim. Organizations with this culture tend to launch many new initiatives, most of which wither like flowers without water. 
  2. Skipping the Advancer stage. This sometimes happens when the Refiners look at the new idea and immediately see problems with it. They see the Advancers as cheerleaders, not realistic about what it will take to make the idea work. They don’t realize that someone first needs to communicate and promote the idea, to see if there really is a market for it. Without Advancers, the idea suffers “paralysis by analysis.” Refiners by nature wear what Edward de Bono called the black hat (seeing the negative). First the idea needs some promotion, so that team members can put on what de Bono called the yellow hat (seeing the positive).  
  3. Skipping the Refiner stage. This happens when the Advancer stage shows the idea has legs and good possibilities. The team gets excited and wants to move straight into execution. But without analyzing the idea and resolving any issues, the innovation will likely fail in execution. Few ideas are perfect in their initial conception. Some refinement is almost always needed. It is like the testing phase in software development: no system can go straight from development to production. It is important to see Refiners not as naysayers but as having an important role to play in perfecting the innovation so the idea will succeed. 
  4. Not following through to execution. This happens when the team does not have many hands-on doers. It is an even greater problem when the idea is a product or service that needs ongoing support and management. Organizations like consulting firms, which are mostly project-based businesses, can have this problem. They are good at managing projects that have a beginning and an ending with a defined set of deliverables. But they may not have many people with the skills and process orientation to manage something day in and day out. 

Even if a person is not assessed as a Flexer, he or she may be comfortable in more than one role. Many team members will have a preferred role while also gravitating toward a second role. One common combination is Creator/Advancer—those who come up with new ideas and are also good at promoting them to others. Another is Creator/Refiner—those who are good at conceiving new ideas and also at perfecting them. A third is Refiner/Executor—those who refine the idea and then implement and manage it going forward. 

My Preferred Role

So, how did I test out? I am a Creator/Refiner. I love coming up with new ideas, and I am also good at analyzing them and refining them to make them better. At the same time, I may not be the best person to advance an idea. In fact, the Refiner side of my profile means I tend to get nervous when the team rushes to promote an idea (especially if it wasn’t my idea!). As an analytical person, I tend to see the problems, the defects. So, I need to remind myself that ideas need to be promoted before they can be refined. There is a time for promoting and a time for refining. 

I am also not a natural Executor. Of course, having owned two businesses for twenty years, I could not avoid ongoing operations. But I have always done my best when I had team members who were good at execution, with attention to detail, so that I could do what I do best. Fortunately, I was blessed over many years to have had a few business associates who were excellent Executors [2][3]. 

Although the original C.A.R.E. assessment is no longer in commercial distribution, it is not difficult for individuals to figure out what roles they prefer to play. The important thing is for the entire team to understand the four roles and to move an innovation idea through these four stages. This will lead to greater appreciation for others and their unique contributions to team success. 

End Notes

[1] The C.A.R.E. assessment was later rebranded as Team Dimensions, which, like DiSC, is also owned by Wiley. Although Wiley no longer markets it, it may be available in different forms through other providers.

[2] One was Barbara Newton, whom I’ve known for 30 years. She worked with me and Dee Long at that systems integration firm I mentioned earlier. She then joined my partner and me when we launched the consulting firm Strativa in 2000 and later acquired Computer Economics in 2005. She was responsible for all of the research publication processes as well as client services. She stayed on through our acquisition by Avasant in 2020 and retired in 2021. 

[3] Another example is Sherry Maples, who joined us in 2001 and stayed on for nearly 20 years. She ran the accounting function and, with Barbara, managed the back-office processes for the two companies, freeing me to focus on consulting and research. She retired in 2019. She was incredibly detail oriented, which is exactly what I needed in those years. 

Tuesday, February 14, 2023

ChatGPT for Industry Research: Not Ready for Prime Time

ChatGPT: Four Errors in a Single Sentence
Since its unveiling in November 2022, there has been a lot of buzz around ChatGPT—OpenAI’s generative AI chatbot. Although there are similar projects, such as Google’s Bard, ChatGPT got an enormous boost because it was free, easy to use, and could be applied in a variety of ways—from answering questions, to writing essays, poems, and song lyrics, to coming up with recipes, to writing software code—all based on simple prompts from the user. 

One inevitable question of course was, to what extent would ChatGPT threaten jobs?

For me, it’s an important question. I direct a team of IT and industry research analysts at Avasant. Is ChatGPT a disruptive threat to our business? Would competitors, with less-skilled workers, be able to generate research as good or in greater quantities than what we could produce? Would clients even need research firms like ours at all, if they could generate their own reports, using these tools? Or, can we leverage a tool like ChatGPT to make us more productive? 

Putting ChatGPT to the Test

To find out, I did a series of tests to see if ChatGPT could write a short research note. 

One of my tests was to ask ChatGPT to write about something I am intimately familiar with: The history of the IT research firm Computer Economics, which my partner and I acquired in 2005 and I sold to Avasant in 2020. (I still manage this business as part of Avasant Research.) 

Here is the transcript of my prompts and the responses from ChatGPT, lightly edited for length, along with my evaluation of ChatGPT. 

Prompt: “Provide a history of the IT research firm Computer Economics.”

ChatGPT responded: 

Computer Economics is a research and consulting firm specializing in the field of information technology (IT). The company was founded in 1982 in Newport Beach, California, by John L_________, who currently serves as the firm's President and CEO. 

There are four factual errors in the second sentence of this opening paragraph! Computer Economics was founded in 1979, not 1982, as documented in this Wikipedia entry. (Wikipedia is a primary source for ChatGPT's training data, so how did it miss this?) Moreover, the firm was founded in New York City, not Newport Beach, by Bruno Bassi, not John L., and John L. was never the company president. He was our VP of Research, from 2007 until he retired in 2017. ChatGPT just made this all up. 

But wait, there's more! A couple of paragraphs later, ChatGPT writes: 

Computer Economics … has received several awards for its work, including recognition from Forrester Research as a “Leading IT Research and Advisory Firm” and from Gartner as a “Top IT Research and Consulting Firm.” 

But neither Forrester nor Gartner has any such award program. In fact, it would be hilarious if they did since they are both IT research and consulting/advisory firms themselves! ChatGPT just made up that part. So, I gave ChatGPT another prompt and a chance to correct itself. 

Prompt: “Please provide more information about Gartner's award program for Top IT Research and Consulting firms. When was this program established and who were some of the most recent winners of this award?” 

ChatGPT responded: 

ChatGPT Hallucinations

Apparently, ChatGPT is not aware of the First Law of Holes: When you find yourself in one, stop digging. 

My prompt asked who some recent award winners were. Now it says the winners are not publicly available. What kind of award keeps the winners secret? Moreover, if the winners are secret, how does it know Computer Economics was one of them? At the same time, the winners must not be secret, because they “can be found in Gartner’s annual report on the market for IT research and consulting services” (which, of course, does not exist).

Risks in the Use of ChatGPT for Research

In summary, here are some observations on the risks of using ChatGPT as a virtual research analyst.  

  1. Fiction parading as fact. As shown above, ChatGPT is prone to simply make things up. When it does, it declares them with confidence—what some have called hallucinations. Whatever savings a research firm might gain in analyst productivity, it might lose in fact-checking, since you can’t trust anything it says. If ChatGPT says the sun rises in the east, you might want to go outside tomorrow morning to double-check it.  
  2. Lack of citations. Fiction parading as fact might not be so bad if ChatGPT would cite its sources, but it refuses to say where it got its information, even when asked to do so. In AI terms, it violates the four principles of explainability. 
  3. Risk of plagiarism. Lack of citations means you can never be sure if ChatGPT is committing plagiarism. It never uses direct quotes, so it most likely is paraphrasing from one or multiple sources. But this can be difficult to spot. More concerning, it might be copying an original idea or insight from some other author, opening the door to the misappropriation of copyrighted material. 

Possible Limited Uses for ChatGPT

We are still in the early days of generative AI, and it will no doubt get better in the coming years. So, perhaps there may be some limited uses for ChatGPT in writing research. Here are two ideas. 

The first use might be simply to help overcome writer’s block. We all know what it’s like to start with a blank sheet of paper. ChatGPT might be able to offer a starting point for a blog post or research note, especially for the introduction, which the analyst could then refine. 

An additional use case might be to use ChatGPT to help come up with a structure for a research note. To test this, I thought about writing a blog post on the recent layoffs in the tech industry. I had some ideas on what to write but wanted to see if ChatGPT could come up with a coherent structure. So, I gave it a list of tech companies that had recently announced layoffs. Then I gave it some additional prompts: 

  • What do these companies have in common? Or are the reasons for the layoffs different for some of them? 
  • As a counterpoint, include some examples of tech companies that are hiring.
  • Talk about how these layoffs go against the concept of a company being a family. Families do not lay off family members when times are tight. 
  • Point out that many employees in the tech industry have never experienced a downturn and this is something that they are not used to dealing with.

The result was not bad. With a little editing, rearranging, and rewriting it could make a passable piece of news analysis. As noted earlier, however, the results would need to be carefully fact-checked, and citations might need to be added. 
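The workflow above (a seed list of companies plus a series of guiding prompts) can also be captured as a small, repeatable script, so the same structure can be reused for future notes. Below is a minimal sketch in Python. The `build_outline_prompt` helper is hypothetical, and the role/content message format is an assumption borrowed from chat-style APIs; the sketch only assembles the prompts and makes no API call.

```python
def build_outline_prompt(topic, companies, guidance):
    """Assemble a chat-style message list: one seed message with the topic
    and company list, followed by one message per guiding prompt.
    (Hypothetical helper for illustration; no API call is made.)"""
    seed = (
        f"Draft a structure for a research note on: {topic}.\n"
        f"Companies that recently announced layoffs: {', '.join(companies)}."
    )
    messages = [{"role": "user", "content": seed}]
    messages += [{"role": "user", "content": g} for g in guidance]
    return messages

messages = build_outline_prompt(
    "recent layoffs in the tech industry",
    ["Company A", "Company B"],  # placeholder names, not the actual list
    [
        "What do these companies have in common? Or are the reasons different?",
        "As a counterpoint, include examples of tech companies that are hiring.",
    ],
)
print(len(messages))  # → 3 (the seed plus two guiding prompts)
```

The point of scripting it is not automation for its own sake: keeping the analyst-written guidance in one place makes it easy to refine the prompts over successive drafts, while the fact-checking and final rewrite remain human work.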

One word of warning, however: In order to learn, young writers need to struggle a little, whether it is by having to stare at a blank sheet of paper or constructing a narrative. I am concerned that the overuse of tools like ChatGPT could deny junior analysts the experience they need to learn to write and think for themselves. 

The larger lesson here is that you can’t just ask ChatGPT to come up with a research note on its own. You must have an idea and a point of view and give ChatGPT something to work with. In other words, treat ChatGPT as a research assistant. You still need to be the analyst, and you need to make the work product your own. 

I will be experimenting more with ChatGPT in the near future. Hopefully, improvements in the tool will mitigate the problems and risks.

Update Feb. 20, 2023: Jon Reed has posted two lengthy comments on this post with good feedback. Check them out below in the comments section. 

Sunday, October 09, 2022

What If You Held a Metaverse Party and Nobody Came?

The metaverse just might be the next big thing, but according to two reports this week, that time is not yet. 

The first story is from CoinDesk, which reports that the two leading decentralized metaverse platforms, Decentraland and The Sandbox, average fewer than 1,000 daily users. Yet each is a unicorn, with over $1 billion in valuation. 

What’s going on in the metaverse these days, you might ask. Looking at two of the biggest companies with over $1 billion valuations, the answer is surprising: Not much, or at least not enough to bring users back every day. According to data from DappRadar, the Ethereum-based virtual world Decentraland had 38 active users in the past 24 hours, while competitor The Sandbox boasted 522 active users in that same time.

An active user, according to DappRadar, is defined as a unique wallet address' interaction with the platform’s smart contract.

This matches my own observation a few weeks ago when I created an account on Decentraland. Apart from the clunky graphics, the thing that struck me was, there's no one here! Until I read the CoinDesk report, I thought maybe I was doing it wrong. But apparently not.  

So, maybe the centralized metaverse platforms, such as Meta’s (formerly Facebook’s) Horizon Worlds, are where the action is. Apparently not. According to this report on The Verge, the user experience on Horizon Worlds is so bad that management under Mark Zuckerberg has to encourage, cajole, and beg even its own metaverse developers to use it.   

In a follow-up memo dated September 30th, Shah said that employees still weren’t using Horizon enough, writing that a plan was being made to “hold managers accountable” for having their teams use Horizon at least once a week. “Everyone in this organization should make it their mission to fall in love with Horizon Worlds. You can’t do that without using it. Get in there. Organize times to do it with your colleagues or friends, in both internal builds but also the public build so you can interact with our community.”

On the other hand, we are already seeing real value in some early metaverse business applications. Two weeks ago, I co-moderated a metaverse panel discussion at Innovate@UCLA. One of the panelists, Chris Mattmann, Chief Technology and Innovation Officer at the Jet Propulsion Laboratory, described how JPL is already using metaverse-like digital worlds to great success for employee onboarding, virtual tours, and virtual meetings.  

Early adopters, like JPL, give an indication of where the value may lie. But for now, as far as public metaverse platforms go, it appears we are close to or at the peak of the hype cycle. 

On the third hand, I’ve been wrong before. As I wrote earlier this year: Predictions are hard, especially about the future.

Image Credit: Decentraland, via CoinDesk. 

Sunday, August 07, 2022

An Innovator’s Story: Creating a Business for Lasting Success

Back in May, I had the opportunity to do an on-stage interview with Jamie Siminoff, founder and CEO of Ring, as part of Avasant's Empowering Beyond Summit.

Ring, the first provider of video doorbells, is an interesting case study in innovation. Siminoff founded the firm in 2013, and, despite walking away from an episode of Shark Tank with no money, grew it to disrupt the home security industry.

Siminoff eventually sold Ring to Amazon in 2018 for over $1 billion. Now, under Amazon’s ownership, he continues to manage Ring, which has grown to be the largest home security camera brand in the world.

Over on the Avasant website, I put together a summary of Siminoff’s keynote and my on-stage interview around two broad themes:

  • Lessons learned in innovation, based on Ring’s invention. 
  • How to ensure success when an innovative startup is acquired by a much larger enterprise.

The research byte concludes with Siminoff’s view on how business leaders in traditional organizations can apply the lessons in innovation.

Read the research byte on the Avasant website: An Innovator’s Story: Creating a Business for Lasting Success

Sunday, May 15, 2022

Predictions Are Hard, especially about the Future

Gemco Membership Card
With nearly half a century in enterprise IT, I have had plenty of time to see how technology predictions over the years have been fulfilled—or not fulfilled. This was brought home to me recently while reviewing an old project document.

But first, some context. As noted in my previous post, I felt forced by a business downturn in 1983 to resign from Smith Tool and take an IT manager position at Gemco, a now defunct membership department store, then owned by Lucky Stores. This returned me to my retail roots.

A Prescient Prediction

Although I only stayed at Gemco a few months, I was put in charge of a strategic systems project: to define the requirements for a new merchandising system. We started by interviewing the senior leaders of the firm and worked our way up the organization until we reached the final interview with the CEO, Peter Harris [1].

The interview summary, dated October 18, 1983, is quite interesting, especially in one paragraph where Harris said:
We need to recognize the changes that will come in the next decade due to the spread of advanced telecommunications. It is likely that 50% to 70% of basic hardgoods and commodities will be purchased from home, eliminating the need for store visits. However, apparel and other fashion merchandise will continue to be purchased in store environments, because of the psychological need to “go shopping.”
Today, I do not recall anyone in the retail industry in the early 1980s predicting the dawn of B2C e-commerce. And apparently, even 10 years later, I was still a skeptic. In the margin of that final report, there appears a note in my own handwriting:
How wrong he turned out to be! –FS, 3/15/93 (10 years later!)
Peter Harris Interview Quote

But little did I know, 1993 was the year that the U.S. Congress passed a law to commercialize the Internet, and it was also the year that CERN released Tim Berners-Lee’s World Wide Web into the public domain. And, one year later, Jeff Bezos founded Amazon. But it took another two decades before a worldwide pandemic pushed B2C e-commerce for certain categories of goods to the levels that Peter Harris predicted nearly 40 years earlier.

So, no, Harris’s prediction was not wrong. He was just off by about 30 years.

Lesson Learned: Keep an Open Mind

As Yogi Berra once said, predictions are hard, especially about the future. Like many others, I tend to be a skeptic, always looking for the negative side of an idea, or what could go wrong. In fact, a few years ago, I wrote a blog post mocking fellow analysts who make year-end predictions. I don't like to make predictions myself and I tend to be skeptical of those who do make them. I have to make a conscious effort to fight this tendency.

So what predictions are out there that might seem far-fetched today but could eventually come into realization?
  • The Metaverse. There are many breathless predictions these days about “the metaverse,” a virtual world where people and organizations can live and interact in a persistent and immersive 3D environment, where they can own virtual property, trade virtual goods, and be educated or entertained. Some argue that the metaverse already exists with various gaming platforms. Others think it is being overhyped by social media companies, such as Facebook (now branded as Meta) that are otherwise out of ideas about how to keep people engaged on their platforms in order to target them for advertisements.
  • Non-Fungible Tokens. NFTs have been a hot market over the past year, with sales of digital art, secured by NFTs on a blockchain, trading for thousands or millions of dollars. The fact that any piece of digital art can be saved with a mouse right-click makes it difficult to understand what exactly an NFT denotes in terms of ownership. The recent and rapid decline in the value of various NFTs confirms to skeptics that they are nothing more than the 21st century equivalent of Tulipmania.
  • Cryptocurrencies. Digital currencies using cryptography, such as Bitcoin, are built using blockchain technology. In contrast to fiat money, such as the US Dollar, they are not backed by a central government but are decentralized, permissionless, and virtually impossible to corrupt. Advocates predict they will replace fiat money, or at least exist alongside it, providing a hedge against inflation and very low transaction costs compared to traditional currency exchanges. At this writing, there is a collapse in cryptocurrency markets, confirming the view of crypto-critics that the whole thing is one big bubble.
It is easy to be a critic, or as Edward de Bono taught, to put on the black hat. It is not so easy to see the problems with an idea while at the same time seeing where there could be value. It is even more difficult to predict when exactly that value might be realized.

Sometimes, predictions are not wrong. They just take longer than we think to be realized.


[1] Peter Harris is an interesting person, starting as a stocking clerk at Gemco and eventually working his way up to President, serving from 1980 to 1984, when the firm achieved $2.2 billion in revenues. Later, he and a partner acquired FAO Schwarz, where he served as CEO until 1992. He then became President and CEO of the San Francisco 49ers (2000-2004) and held several other leadership positions after that. Today, he is retired and serves on several boards, including the Palo Alto Medical Foundation. He is still on LinkedIn.

Update, May 22, 2022 

One of the joys in writing this series of career posts is reconnecting with people I worked with decades ago. So, I sent a message to Peter Harris on LinkedIn.
Peter, I'm sure you don't remember me, but I interviewed you in 1983 at Gemco. I just wrote a blog post about your prediction about E-commerce. [Link to this post.] Let me know any feedback. --Frank
This morning he wrote back:

Frank, I am absolutely blown away to hear from you and read of your perspective, highlighted of course by your absolutely amazing record keeping mention of something I said many years ago.  While I think 30 years early doesn't count as anything beyond being impracticably thoughtful, I was honored and  hugely appreciative to be recognized.  Your article is fascinating and I am now following you so that I might observe and learn from your thinking and musings on other topics.  That you have tracked me down on LinkedIn and shared it means a lot.  The appropriate comments are "way cool," "awesome" or maybe even "wowza."  Thank you so very much.   I'd be interested to hear a bit more than is visible on LinkedIn about what you are doing now if you have time to share. --Peter

[Posted with Peter's permission.]

Update, Aug. 8, 2022

The same year, 1983, Michael Dertouzos made this incredible prediction of the World Wide Web. Click to watch. 
Michael Dertouzos video thumbnail


Wednesday, April 20, 2022

The Most Significant System Development Project of My Career

Drill rig
This post continues my series on lessons learned in my nearly half century in enterprise IT. We started in 1974 with my job at Macy’s headquarters in Manhattan, followed by my move to California in 1976 and my job at TRW Credit Data. I then took a job at Smith Tool in 1978, where I got thrown into the deep end with manufacturing systems. This led to several more important lessons learned, including the failure of a waterfall development project and my first encounter with shadow IT.

But there were more lessons to be learned at Smith Tool. 

Next would be the biggest and most important project of my career. Rolling off a series of manufacturing system development projects, I was now assigned to a task force to develop a strategic system to analyze the performance of Smith’s drill bits in the field. I would be the project manager for a small team of developers and the overall system architect. 

An Unspoken Objective

The first phase was to build a bit record database, which would become the foundation for several future systems. The database, which would ultimately contain millions of historical drilling records from around the world, would be used for preparing well proposals, evaluating product performance, conducting competitive analysis, and providing a feedback loop from the field to engineering to improve product quality. 

But there was another, unstated objective. Smith Tool had been sued for patent infringement by Hughes Tool (the business that made Howard Hughes his initial fortune). The patent was for a novel application of an O-ring, which sealed the lubricated bearing of the three roller cones from the harsh downhole conditions. O-rings (made famous for their failure in the space shuttle Challenger disaster) were in common use at the time, but Hughes had discovered that if you squeezed the O-ring a bit it actually extended the life of the seal. This was counter-intuitive, but it worked. The litigation had been dragging on for over a decade, starting with Smith getting a federal court in 1979 to invalidate the patent, and Hughes getting a federal appeals court in 1982 to reverse that decision. That was just before I was assigned to the development project, which would be an important element in Smith’s defense. 

The lawsuit, for about $1 billion, was at that time the largest patent infringement case in history. The lawsuit alleged that Smith’s use of the Hughes patent made Smith’s bits competitive with Hughes, earning Smith profits that it would otherwise not have earned. To defend against the Hughes claim, Smith would need a system to provide the data analysis. 

None of this was mentioned to me at the time. I only knew that the project was getting me a lot of attention from top management. In fact, my old manager, Rodger Beard, recently told me that at corporate headquarters they were talking about how my system would “save their bacon.” 

Lesson Learned: Immerse Yourself in the Business

Shortly after the project kick-off, I learned that there was a week-long training program about to start for new field sales people. I invited myself in and got to sit through detailed lessons on Smith’s products and how they were used by customers. I found the whole week fascinating. [1]

Halfway through the week, Dan Burtt, the IT director, noticed I was not at my desk and found out about the class. “Why is Frank taking sales training?” he asked. I managed to convince him to let me finish. 

Since I had been developing or maintaining many of Smith’s manufacturing systems, I already understood the engineering and production data that would be needed to correlate with field performance. What I lacked was an understanding of that field data. My degree in Geology helped, but all of this was mostly new information. 

There were also some thorny design problems, such as how to designate well locations in different parts of the world, using different coding schemes. I spent several hours at the UC Irvine library learning about various geographic location systems in use in the U.S. and around the world, such as the section-township-range system, originally proposed by Thomas Jefferson. 

In any new system development project, you have to start with a deep understanding of the business. It is not enough to have users tell you what they need. It’s more than gathering requirements. You have to have a sense of curiosity and immerse yourself in the industry and the business.

Lesson Learned: Take Advantage of Career Adversity

But the oil industry is notorious for booms and busts, and we were heading into a major bust. There was a massive company layoff, and the IT staff was not excluded. With fewer IT personnel, we didn’t need as many first-level managers, so I was demoted back to project manager. Even worse, after I finished the requirements definition, my project was put on hold pending budget approval to move forward. This was the last straw. I resigned in August 1983 and returned to my retail industry roots, taking an IT manager position at Gemco, a now-defunct membership department store.

Beta Management Systems Logo

But, after a few months I got a call from Smith. The bit record project had been funded. Could I come back to lead it? I said yes, on one condition: I wanted to come back as a consultant, not an employee. I had been thinking for some time about a consulting career, and this was my opportunity. Smith agreed—I had so much knowledge of the project and the business requirements that it seemed like a small request. 

This launched my consulting career, as a sole proprietor doing business as Beta Management Systems. [2] [3]

Development, Implementation, and a Move into the Business

Now I was back at Smith, leading a small team of developers. I designed the system mostly as an online system (IBM’s CICS) but with a little batch programming to extract manufacturing and engineering data on a nightly basis. As usual, I wrote some of the most important code myself. The database was eventually going to hold millions of records, and it would be used for online analytical processing (OLAP), so it needed to be fast. I designed the database in IBM’s VSAM, and I set up alternate indices to provide quick access for the most common types of standard reporting. This was before the days of widespread use of relational databases, or at least before Smith had one. 
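As a rough modern analogy, a VSAM alternate index works like a secondary dictionary that maps a non-primary field back to the primary record keys, trading extra storage for fast lookups without a full scan. A minimal sketch in Python (the record layout and field names here are invented for illustration):

```python
# Records stored once, keyed by bit record number (the "primary key")
records = {
    "BR0001": {"well": "W-17", "bit_type": "F2", "footage": 1200},
    "BR0002": {"well": "W-17", "bit_type": "J3", "footage": 950},
    "BR0003": {"well": "W-22", "bit_type": "F2", "footage": 1430},
}

# Build an "alternate index" on bit type: bit_type -> list of primary keys
by_bit_type = {}
for key, rec in records.items():
    by_bit_type.setdefault(rec["bit_type"], []).append(key)

# Retrieve all runs for bit type F2 without scanning every record
f2_runs = [records[k] for k in by_bit_type["F2"]]
print([r["well"] for r in f2_runs])  # ['W-17', 'W-22']
```

The same idea, maintained by VSAM itself, is what made the common standard reports fast against a database of millions of records.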

For the OLAP reporting, I used something new. The year before I had gotten trained in FOCUS, a fourth-generation language from Information Builders (acquired by TIBCO in 2021). This was an excellent tool for reporting and analysis, especially for ad hoc inquiries. This is how I would develop the OLAP reporting that would prove instrumental later on in supporting the patent litigation. 

Initial system development took less than a year. I still have a copy of the system user guide, dated November 1984. Users began loading bit records in 1985. 

As soon as the system went into production, there was no more need for me in the IT department. But there was a huge need in the engineering department, where all that ad-hoc analysis would need to be done against the database. So, I left IT and went down the street to the “Hobie Cat Building” (the former owner) to begin as a consultant in the engineering group known as Product Evaluation. [4] 

Within Product Evaluation, I became part of a small team to develop the OLAP reporting for the bit record system. Looking back, this was the best experience I’ve ever had in a team. There was our manager, Jim Watson, who was a metallurgist by training and product failure analyst. Jim became a personal friend of mine over the years. There was Steve Steinke, a geologist, who provided the knowledge of the oil field. Rounding out the team was Joel Palmer, a statistician, who ensured that our analysis was statistically valid. Then there was me, the systems guy. 

Lesson Learned: Understand Basic Statistics

Textbook cover--Calculate Basic Statistics
Looking back, I now appreciate how the statistical validity of our analysis would be critical. This was important not only because we needed to ensure that the conclusions of our analysis were on a sound footing generally, but also because some of our analysis would be presented in court in Smith’s legal defense. 

I had started out as a math major at UPenn, but I’d never had a course in statistics. So, even though we had a statistician on our team, Smith brought in Dr. Mark Finkelstein, a mathematics professor from UC Irvine, to coach us once a week on basic statistics. He used his own textbook, pictured nearby. We learned about descriptive statistics and inferential statistics, regression, correlation, and confidence intervals. 

The key point I learned was this: Just because a data set appears to show a correlation between two variables, it might not be statistically significant. For example, I might be asked to divide a sample of bit runs from a group of nearby wells into three groups according to some engineering parameter. My analysis might show that as the parameter increases, the bit performance improves. But that conclusion might be spurious. On more than one occasion I had to tell the requestor that, even though a graph might appear to support his theory, the statistics did not. 
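To make the point concrete with a small, made-up sample: the Pearson correlation coefficient can look strong, yet the t-test for significance (with n − 2 degrees of freedom) fails to reject the null hypothesis because the sample is tiny. A sketch in Python, using invented bit-run data:

```python
import math

# Hypothetical data: five bit runs, an engineering parameter (x)
# plotted against bit performance (y)
x = [1, 2, 3, 4, 5]
y = [2.0, 1.0, 3.5, 2.5, 4.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / math.sqrt(sxx * syy)  # Pearson correlation

# t statistic for H0: true correlation is zero (df = n - 2)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
t_crit = 3.182  # two-tailed critical value, alpha = 0.05, df = 3

print(f"r = {r:.2f}, t = {t:.2f}")  # r = 0.73, t = 1.84
print("significant" if abs(t) > t_crit else "not significant")  # not significant
```

A correlation of 0.73 looks impressive on a graph, but with only five points the t statistic falls well short of the critical value, which is exactly the kind of answer I sometimes had to give a disappointed requestor.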

Eventually, the Smith lawyers asked me to perform statistical analysis in support of the patent litigation. In response to the court ruling that Smith was infringing on the Hughes patent, Smith had redesigned its bits to use an older seal, called a Belleville seal, instead of the O-ring. Smith contended in court that the new seal provided performance equal to that of the O-ring, and my analysis supported that conclusion. But the new seal was more expensive than an O-ring, increasing the cost of a tricone bit by about $29. According to a Los Angeles Times account of the trial: 

According to [Judge] Hupp’s chronology of the events that led to Smith’s using Hughes’ patented device, Smith stopped manufacturing the Belleville-type seals in 1972, in part because they made the Smith device cost about $29.02, or an estimated 3.2% of the total purchase price, more than the competing Hughes product.

Smith’s attorneys argued, therefore, that the damages to be awarded Hughes should be calculated based on the difference in product cost for the half million infringing bits, or about $14.5 million, rather than the billion-plus that Hughes was claiming. 

Bottom line, as I was told: The judge agreed that the performance of the Belleville seal was equal to that of the O-ring but did not agree that damages should be based on the difference in cost. The judge assigned damages of just over $200 million. In other words, we won the battle that I was fighting, but lost the larger war. [5]

My appreciation of statistics would benefit me later in my career, when Dan Husiak and I acquired the IT research firm Computer Economics. I took over the research group, which collected and published metrics on IT spending and staffing. Many times, I was confronted with what appeared to be a correlation between IT spending and some other metric. My experience from Smith Tool taught me to be skeptical if the sample size was small. 

Postscript: Successor System Still Delivering Value

DRS Drilling Record System log in panel
The combination of the court judgment, a continuing downturn in the oil industry, and some poor business decisions was too much for Smith to overcome. The company filed for Chapter 11 bankruptcy protection, divested noncore businesses, and was able to come out of bankruptcy in the same year. I was still working as a contractor to Smith through this entire time, but at less than a full-time basis. This gave me time to develop business with other clients. 

So, what happened to the Bit Record Database? In 1988, while I was winding down my work on the system, Steve and Jim delivered a presentation at the IADC/SPE Drilling Conference. They reported that the system contained 100,000 bit records. They also reported that the team had built an interface from the mainframe to PCs running dBase in field offices. This was how they were preparing bit programs for new wells. 

Then, in the mid-1990s, I got in touch again with Steve, who told me that Smith had migrated the system from the IBM mainframe to a personal computer running the Progress database. 

So, in writing this post, I got curious: Where is the Bit Record System today? Smith was acquired by Schlumberger in 2010, which rebranded the Smith Tool business as Smith Bits. A little digging uncovered a recent edition of the Smith Bits product catalog, and it has an interesting page on something called the “DRS drilling record system.” 

The Smith Bits DRS drilling record system is a collection of nearly 3 million bit runs from virtually every oil and gas field in the world. The database was initiated in May 1985, and since that time, records have been continuously added for oil, gas, and geothermal wells. With this detailed data and the capabilities of the IDEAS platform, Smith Bits engineers can simulate bit performance and make changes to their bit designs to optimize performance in a specific application. [Emphasis added]

With that date of May 1985, I have no doubt that this is the successor to the Bit Record Database. It is interesting that Schlumberger has renamed the system the Drilling Record System. That may be because, even in my original design, the system included data on bottom hole assembly tools other than rock bits, as well as other drilling data such as hydraulics. We called it the Bit Record Database because the form that the system was based on was commonly called a bit record. A DRS screen shot is shown below (click to enlarge). 

DRS Drilling Record System screen shot

Update, Aug. 13, 2022. I have now reconnected with my old teammate, Steve Steinke, who retired two years ago from Schlumberger's Smith Bits group. Steve worked with the DRS system over all those years since we were together. Steve confirmed my recollection of our discussion in the early 1990s that Smith converted the system to a single PC running the Progress database. The main motivation for this was to get off the mainframe. Then around 1999, Smith rewrote the system on an Oracle platform. At the same time, they greatly expanded its functionality to include records of other downhole tools besides rock bits. The team continued to expand the system to include records of other drilling equipment and systems as well. It now even includes geological data, such as formations encountered at various depths. Today it contains something like 1.5 million wells and is used by other Schlumberger business units in addition to Smith Bits. 

In an interesting side note, Steve confirms that the worldwide geographic location coding system I developed is still part of the system design. But Steve personally enhanced the design to automatically derive latitude-longitude from section-township-range, to more easily identify offset wells. 

In any event, I am proud that the system development work I did in the 1980s, over a period of about eight years, still continues to deliver value today. 


[1] The training sessions were not all technical. There were lessons on how to behave properly in the field, including advice such as, when driving through a gate on a cattle ranch, be sure to close the gate behind you.  Another lesson told us not to beg for business or claim that you’ll get fired if you don’t make the sale—unless that’s the only way to close the deal. There was another lesson with a pamphlet entitled, “How to Turn WAGs into SWAGs,” where a SWAG is a scientific WAG. It had something to do with using data in sales proposals. We also learned that in the early days, Smith was known as the “Whisky Bit,” because sales people would put a bottle of whisky in the pin of the bit. So, when the roughnecks would get thirsty, they’d say, “Let’s open one of them whisky bits.” 

[2] There was no significance to the word Beta. I didn’t have money to spend on a logo, so I figured I could get the printer to use the Greek letter beta in place of the normal font. That allowed me to use the business name as a logo. 

[3] Having at least a year of guaranteed contract work, maybe more, was a huge factor allowing me to break into consulting. A year earlier, our third child, Joanna, was born, and we had just bought our first home. Finances were tight. As it turned out, though, my work with Smith took me through most of the 1980s as I then began to add other clients, mostly through referrals from other “Smithereens” (people who had quit or left Smith during the rounds of layoffs). 

[4] Among other responsibilities, the Product Evaluation group provided post-mortem analysis of bits that failed in the field. They had a large room that they called the “morgue,” with bits that had failed, laid out in table top trays. The group included metallurgists and engineers that did root cause analysis to determine the causes of failures and make recommendations for changes in product design, manufacturing processes, and quality procedures.  

[5] This was a stressful time, with the Smith legal team often asking for additional ad-hoc analysis, sometimes just as I was about to leave for the day. But, to their credit, they did a good job keeping my name out of discovery so I wouldn’t have to be deposed. I think it helped that I was a contractor and not a Smith Tool employee. Not that we had anything to hide. But it wouldn’t have been a pleasant experience. Jim and Steve were deposed and testified in court. I got to see a trial transcript, and from what I read and what they told me, it was grueling. 

Photo Credit: Drill Rig, Pixabay

Friday, December 24, 2021

Cerner Acquisition to Launch Oracle Higher into Healthcare

Oracle Logo and Cerner Logo with medical doctor using a touch screen
Earlier this month, Oracle and Cerner jointly announced an agreement for Oracle to acquire Cerner, a provider of digital systems to healthcare providers. The deal, valued at approximately $28 billion, will be the largest in Oracle’s history, nearly three times the size of its PeopleSoft acquisition in 2005.

To understand the rationale behind the deal and what it means for the two companies, the industry, and especially for Cerner customers, we interviewed Avasant partners, consultants, and fellows who focus on the healthcare industry.  This research byte summarizes our point of view.

Read this post on the Avasant website: Cerner Acquisition to Launch Oracle Higher into Healthcare

Sunday, October 24, 2021

My First Encounter with Shadow IT

TRS-80 Home Computer
In my recent post on what I learned about enterprise IT at Smith Tool, I mentioned that I needed another few posts to cover some of the more interesting lessons learned. I already covered what I learned from a failed waterfall development project in 1980. But the lessons kept coming, shortly thereafter, in my first encounter with “shadow IT.” 

Shadow IT commonly refers to information systems that are purchased or developed apart from the corporate IT department. 

An Inventory System for Tooling

In 1981, I got a new assignment: to develop a system to manage inventory of perishable tooling in the manufacturing plant [1]. Our manufacturing systems, some of which I had developed, did a fairly good job of managing inventory of direct material—raw materials and parts that went directly into the finished product. But they did not yet fully manage the inventories of tooling that were needed to make those parts, such as cutting tools and grinding wheels. Managing tool inventory was important, because a stock-out of tooling could delay a production order just as much as a stock-out of direct material could. 

We had already built systems to maintain tooling bills of material (tool lists) and to associate those tool lists to production orders. We had also built a system to track tool usage on production orders. But we had not yet closed the loop to track on-hand inventory of tooling and to plan for replenishment based on production plans. The existing manual system was nothing more than an intricate paper-based min/max system that required a physical inventory count three times a day! Expediting to cover shortages of tooling was a way of life. As a result, my analysis showed that 90% of manufacturing production order delays were the result of tooling being unavailable. The benefits of an automated system would be huge [2]. 
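For readers unfamiliar with min/max control, the underlying rule is simple: when on-hand stock falls to a minimum level, reorder enough to bring it back up to a maximum. A toy sketch of the logic the paper system was implementing by hand (the tool names and quantities are invented):

```python
# tool_id: (on_hand, min_level, max_level)
tools = {
    "INSERT-T4": (35, 50, 200),
    "WHEEL-G7": (120, 40, 150),
}

def reorder_report(tools):
    """Flag each tool at or below its min and order up to its max."""
    orders = {}
    for tool_id, (on_hand, lo, hi) in tools.items():
        if on_hand <= lo:
            orders[tool_id] = hi - on_hand  # order-up-to-max quantity
    return orders

print(reorder_report(tools))  # {'INSERT-T4': 165}
```

The weakness of min/max, of course, is that it ignores what production actually has planned: it reacts to stock levels rather than anticipating demand, which is why we wanted to close the loop against production orders.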

Some perishable tooling items were purchased. The rest were fabricated in Smith’s own tool-making shop or subcontracted to outside tool makers. The tool room was, in essence, a factory within a factory, and it would ultimately require a manufacturing planning and control system linked to the main MRP system at Smith Tool. This is a key point in the lesson learned to come. 

A Startling Discovery

My first step was to take a walk out to the tool crib to meet with the Manager of Tool Control (“Fred”), gather some data, and talk to him about his requirements. The conversation, as I recall it, went something like this. 

Me: “Hi Fred, as you may have heard, we’re starting to gather requirements for a new Tooling Inventory System to replace your manual system.” 

Fred: “Oh, no need to do that, we’re going to install the computer system that they’re using over in the San Bernardino plant.” 

Me: “Wait, what?” 

Fred: “Yeah, the guys over in San Bernardino didn’t want to duplicate our manual system, so one of the NC programmers put together a system on one of those TRS-80 computers you can get at RadioShack [3]. It only took him a few weeks to program it.”

Me: “Fred, wait a minute. I’m not out here setting up my own tool room. So, you guys shouldn’t be setting up your own computer system. That's my department.” 

After this, I managed to calm down and got a walk through to see the tool crib operations and to gather some sample documents. 

When I returned to the IT department, I went straight to Rodger’s office to tell him what happened. He told me that I’d better take the 90-minute drive out to San Bernardino to see this rogue TRS-80 system. 

Evaluating the Shadow IT System

The IBM PC would be on the market in just a few months (August, 1981), opening the floodgates to what would later be called end-user computing. But this was my first encounter. What stunned me the most was not just that my users had usurped my job of systems development but that they appeared to have done in a few weeks what we had planned to do in a six-month effort. However, as I would soon learn, the scope of what they had done on the TRS-80 fell far short of what we were planning to do on the mainframe. 

The first thing I saw involved limitations of the TRS-80 hardware as a business computing platform [4].  These were easy observations. But, as we all know, personal computers would soon overcome those limitations and become a real disruption to mainframe computers.

My more strategic observation, however, was that the TRS-80 system only addressed a single need in what should be a closed-loop, end-to-end process for managing tooling. There were no tooling bills-of-material (tool lists), no tracking of tool usage, no association of tool lists to production orders (at multiple revision levels), and no determination of tooling demand based on planned or released production orders—all functionality that we had already built or were about to build on the mainframe system. I concluded: 

The TRS-80 system in use now by San Bernardino Tool Control basically serves their need for the maintenance of inventory data. It is a simple alternative to the card file used by Irvine Tool Control for this data. However, the long-range use of the TRS-80 has serious limitations as outlined above.

Beginning of a Major Disruption in Enterprise IT

Although I didn't realize it at the time, the world of corporate IT was changing. In reviewing an early version of this post, my manager, Rodger Beard, offered this analysis (lightly edited). 
Our Smith Tool TRS-80 experience demonstrated a trend that was already unfolding regarding the nature of business computing. Dramatically cheaper computing hardware and operating system software had already started to come on the scene with the introduction of mini-computers, from DEC and HP especially. But the TRS-80 and IBM's hurried, clumsy, poorly conceived PC initiative soon after had a far, far bigger impact. Mini- and micro-computers enabled the rapid movement away from the IBM computing castles that were then the norm, with budgets that only kings could afford. Because the dollars were so great and castles took so long to build (with high implementation costs and high risk of failure), there was a critical business need for better and cheaper ways to deploy automated business systems. The TRS-80 and then PCs offered a way to fulfill that need, and to work around IT departments that were mostly seen as being in the way.
That said, low-cost hardware was just the first leg of a 3-legged stool of disruptive technological innovations that would become manifest over the coming years. The second leg was the faster development time that these platforms offered to build business software. The TRS-80s at Smith Tool clearly demonstrated that software could be developed more cheaply, faster, and more easily, albeit with certain downsides that we, the knights guarding the IT castle, thought were important, but not as important to many in the business. 
The point is that shadow IT was conceived as the direct solution to an already very well known problem. There was too high a cost, as well as too much delay and pent-up demand for business software. Packaged software suites, higher-level and advanced programming languages, 4GLs like FOCUS, emerging SDMs, software engineering and coding being taught in public schools, software training as an industry, computer engineering majors offered on every college campus, H-1B visas, outsourcing, and then offshoring, all were solutions intended to solve this problem. Net of it all, software cost is now tiny compared to when we found out about the TRS-80.

The third leg of the new stool was obviously overcoming the primitive data communications networks of the time, as well as the costs and delays associated with creating node-to-node communications. The creation of the internet, and with it fast, cheap, available connectivity, was the disruptive change that gave the new IT stool all three legs. (Wow did it ever!)

Lesson Learned: Bring Shadow IT out of the Shadows

Rodger is right. Although my analysis of the TRS-80 system may have been correct in identifying its shortcomings, it took me a few more years to understand the bigger picture. When the business has a need and the corporate IT organization does not have the resources to meet that need, the business will find a way to solve the problem. The days when the IT organization could just say no, or wait until next year, were coming to an end. 

Ultimately, the technology disruption brought its own new set of challenges. For example, user departments that purchased or built their own software were soon asking for the IT organization to connect them to corporate systems. Many IT leaders were understandably distraught over these requests when they had not been involved in the original development or procurement of the shadow system. 
Over the years, I have found that the healthiest way to deal with shadow IT is to bring it out of the shadows. It is really a matter of IT governance. Best practices include having a multi-year IT strategic plan that addresses major needs throughout the organization, guidelines to determine which systems are best deployed by corporate IT and which can be left to end-user development, an overall enterprise architecture, and budgetary flexibility so that in some cases the business funds new system development, with the IT organization delivering or managing the services. 

Postscript: My recollection is fuzzy concerning the events that followed. My personnel file indicates I finished implementing the corporate tooling inventory system in 1982, and I moved on to an even more interesting project. But this all took place just before the multi-year decline in oil prices and collapse in the US drilling rig count, which devastated Smith’s business. The San Bernardino plant was shut down, so the use of the TRS-80 system became moot. 

End Notes

[1] In reviewing the system specification I wrote for this project, I notice that I applied the lesson learned from my previous project, where we had a failure with the traditional waterfall development approach. In my system specification for this project, I wrote: 

“Because a project addressing all the known requirements for a tooling inventory system would take over one calendar year to develop, we have adopted a two-phase approach. This allows Manufacturing Services to receive benefits from the project within six months of product initiation. It also allows us to evaluate the use and effectiveness of the system delivered in the first phase before beginning the second." [Emphasis added.] 

In other words, I was determined to test the users’ commitment to adopt initial capabilities of the new system before IT would spend the time and effort to develop the rest of it.

[2] When I started writing this post, I assumed that tooling inventory control systems would be commonplace today as modules within manufacturing ERP systems. Although SAP appears to have a solution, I am hard-pressed to find many others, outside of a few point solutions. I have a feeling that many customers today manage tooling inventory as a special item type in the production bill of material, which may be adequate for many manufacturers, although this approach has its shortcomings. If readers have insights on this, please leave a comment on this post. 

[3] RadioShack, founded in 1921, was a one-stop shop for all things electronic, from components to personal electronics to micro-computers. It essentially went out of business in 2015. The TRS-80, launched in 1977, was one of the first widely available microcomputers. 

[4] As I wrote in my trip report, 

“The TRS-80 is a microprocessor, and it is not designed for large-scale business systems. It has limitations on file sizes and key lengths…. There are limitations in real storage…. The hardware is designed to be run only 8-10 hours a day. It is designed for the occasional hobbyist or for a light back-office business, not for the day-to-day operation of a heavy manufacturer.” 

Moreover, the user-developed tooling system was only intended to satisfy needs around tooling procurement and inventory control. There was no functionality for tooling bills of material (tool lists) or ability to associate them with production orders. In Irvine, we had already built a system to automate these functions on the mainframe, but San Bernardino de-automated those functions and put them back onto a paper-based system. 

Image Credit 

TRS-80. Attribution: Blake Patterson. Source: Wikipedia Commons.

Saturday, September 18, 2021

What I Learned from a Waterfall Project Failure

In my most recent post on lessons learned in my career, I covered my time as an IT employee at Smith Tool. I learned so much in those years, and I need another few posts to cover some of the more interesting lessons.

By 1980, my work in manufacturing systems had been all in machining operations. Now Rodger gave me a new assignment: to develop a new system to support Smith’s forge plant. This opportunity took me upstream into the forge, which I was told was the largest forge in the United States west of the Mississippi [1].

As I noted in the previous post, I loved the nitty-gritty sights and sounds of the metalworking plant. But the forge was another whole level of physicality, almost violent. As you approached the forge, you could hear the hammer and feel the ground shake as the press hammered out parts [2]. And outside the plant were pallets of newly forged parts, still red hot to the point where you could not stand closer than 20 or 30 yards without feeling the heat.

Here is a good video of a forge plant that gives a sense for what it’s like to be inside one. The press in this video is smaller than the two at Smith. Also, our forge plant was more modern and the parts being forged in the video are different, but the sights and sounds are the same.

Forging Operations Are Tricky to Schedule

Smith used a process called closed die forging. This means that the red-hot steel bar would be pressed into a die in the shape of the finished part. The tricky part is that a die would only be able to produce a given number of forgings before it had to be sent out for “resinking,” a sharpening operation, so that it could produce more forgings. The production scheduler used manual log books to keep track of how much life was left in each die and to know which dies had enough life left to fill an order. But if the log books were not updated correctly, the forge might not be able to meet its production schedule. This was the process the new system would automate, with the benefit of better meeting production schedules.

The new system was described in a company newsletter after the project was (supposedly) completed.
Simply stated, the Forge Die Tracking System keeps track of over 3,000 components that make up the 300 forge dies used to stamp out the various forgings to make Smith Tool’s products. Additionally, the system makes the die selections to fill each forge order according to which dies most closely match in wear and on the basis of how much life the dies have left. When the system notes that a die’s life is getting low, it will suggest to the scheduler that the die be used to make as many forging as is left in it to make, and then be sent out for resinking. So, while the order is being filled, the excess forgings being produced will go into inventory as safety stock.
I was assigned as project manager and lead analyst six months into the project, which had stalled for lack of someone who could develop the calculations for picking the best set of dies for a given order, suggesting the optimum production quantity to “run out” a die, and tracking dies sent out for resinking.

I took it as a challenge. I remember distinctly that this was taking place during the run up to the US 1980 presidential election. So, I wrote my programming specifications with references to identifying candidate dies, nominating them, and then electing the best one. Although I had three programmers reporting to me for this project, I wrote some of that core scheduling logic myself.
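The "candidate, nominate, elect" logic can be sketched roughly as follows. This is a much-simplified, hypothetical reconstruction in Python (the real system ran on an IBM mainframe, and every name and threshold below is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Die:
    die_id: str
    life_left: int    # forgings remaining before resinking
    wear_pct: float   # current wear: 0.0 (freshly sunk) to 1.0 (worn out)

RUNOUT_THRESHOLD = 100  # if remaining life is this low, run the die out

def elect_die(dies, order_qty):
    """Elect a die for an order and suggest a production quantity."""
    # Candidates: dies with enough life left to fill the order
    candidates = [d for d in dies if d.life_left >= order_qty]
    if not candidates:
        return None, 0
    # Elect the most-worn candidate, holding newer dies in reserve
    best = max(candidates, key=lambda d: d.wear_pct)
    # If the elected die would be nearly spent, run it out; the excess
    # forgings go into inventory as safety stock
    if best.life_left - order_qty < RUNOUT_THRESHOLD:
        return best, best.life_left
    return best, order_qty

dies = [Die("D-101", 500, 0.40), Die("D-102", 260, 0.75), Die("D-103", 90, 0.95)]
die, qty = elect_die(dies, 200)
print(die.die_id, qty)  # D-102 elected; only 60 forgings would remain, so run it out: 260
```

The sketch captures the two key ideas from the newsletter description: electing the die whose remaining life most closely fits the order, and suggesting a run-out quantity when a die’s life is getting low.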

The New System Installed but Not Implemented

So, what went wrong? There is a little hint of the problem in the final paragraph of that newsletter story on the project:
All the die structures will be on the system by the end of November. The history of all those components will be on by mid-December. The Forge personnel are enthusiastically anticipating the ease and efficiency the system will bring to the Forge operation. [Emphasis added]
Note the future tense.

Here’s how it went down. We put the system into production, and each week I would follow up with the director of forge operations to see how they were coming along with loading the die information. He had assigned the job to his administrative assistant, but the director told me she was too busy to get to it. After several weeks of follow-up, it was clear that they had no intention of loading the master file data the system would need to start scheduling the forge. The system became a very expensive piece of shelfware. I don’t recall how long they let it continue to run in production without any data to process, but I was soon assigned to another development project.

So, how did this project get approved in the first place? Later I found out that the forge director felt the IT department was spending all its time developing systems for the machining plant, and that it was “his turn” for a new system. The IT steering committee complied.

Lesson Learned: Test User Commitment through Phased Implementation. This was my first real experience with the drawbacks of a waterfall development approach, where you define all the requirements up front, then design the entire system, then program it, test it, and deploy it. In this case, the users were happy to meet with us to provide their requirements and review our system design. But in terms of actually doing any real work, the users were off the hook until we put the system into production. At that point, they were not willing to let a low-level administrative assistant spend the time to do the necessary data entry, or hire a temp worker to backfill her regular duties so she could do so.

After that I vowed never again. What we should have done is build the database and then have the users enter the master file data before we invested more time programming the scheduling logic, the really difficult part, where the bulk of the development hours would go. That would have tested the users’ commitment and saved several months of wasted effort. This was 20 years before the Agile Manifesto, but my software engineering courses at UCI had already taught me about Barry Boehm’s spiral development methodology, which in many ways anticipated Agile. If only I had had the foresight and permission to take this approach.

Postscript: After reviewing a draft of this post, Rodger Beard, the department manager at the time, shared further recollections. He writes (lightly edited):
I felt at the time and actually still feel a sense of personal failure for allowing this project to unfold the way it did. Exactly the way you've described. A painful recognition, over many months, that good work was being completely wasted.

This was my first experience with having pure politics result in a significant waste of IT resources. However, like you, I learned from it. An aside, at the time, I also felt this system was not well conceived. If it were well deployed, it could have had an excellent ROI. But with the caveat that additional, easy-to-avoid ongoing human capital investment would be necessary to make it pay off. A red flag really.

I think you're spot on regarding how the requirement should have been addressed. This was definitely mine (especially) as well as [name withheld]’s leadership error. We knew it was an "it's my turn" situation. [Our leaders] had decided to throw the forge a bone to shut up the forge director. If only I had had the foresight to ask you to do what your article says should have been done. Sigh.
Thanks again, Rodger, for your confirmation in this series.   

End Notes

1The forge was about 200 yards from the Smith Tool metalworking plant on Von Karman Avenue in what used to be known as the Irvine Industrial Complex. Driving through that area today, now known as the Irvine Business Complex, it’s hard to believe there was such heavy manufacturing there into the 1980s. That part of Irvine is now mostly commercial offices and some distribution or light manufacturing facilities. The metalworking plant today is an Amazon distribution center.

2The purpose of forging is to form the raw material into a part that has dimensions close to what is needed so that the first machining operation only needs to remove a minimum amount of metal. Forging also improves the metallurgical properties of the material. It is the same process as the ancient blacksmith employed with his hammer to make metal implements, such as horseshoes. In fact, Smith Tool began as a blacksmith shop in 1902 in Whittier, CA.

Photo Credit

Waterfall. The original uploader was PaulHoadley at English Wikipedia, CC BY-SA 2.5, via Wikimedia Commons