Tuesday, February 14, 2023

ChatGPT for Industry Research: Not Ready for Prime Time

ChatGPT: Four Errors in a Single Sentence
Since its unveiling in November 2022, there has been a lot of buzz around ChatGPT, OpenAI’s generative AI chatbot. Although there are similar projects, such as Google’s Bard, ChatGPT got an enormous boost because it was free, easy to use, and could be applied in a variety of ways: answering questions; writing essays, poems, and song lyrics; coming up with recipes; and writing software code, all from simple prompts from the user. 

One inevitable question of course was, to what extent would ChatGPT threaten jobs?

For me, it’s an important question. I direct a team of IT and industry research analysts at Avasant. Is ChatGPT a disruptive threat to our business? Would competitors with less-skilled workers be able to generate research as good as ours, or in greater quantities? Would clients even need research firms like ours at all if they could generate their own reports using these tools? Or can we leverage a tool like ChatGPT to make us more productive? 

Putting ChatGPT to the Test

To find out, I did a series of tests to see if ChatGPT could write a short research note. 

One of my tests was to ask ChatGPT to write about something I am intimately familiar with: The history of the IT research firm Computer Economics, which my partner and I acquired in 2005 and I sold to Avasant in 2020. (I still manage this business as part of Avasant Research.) 

Here is the transcript of my prompts and the responses from ChatGPT, lightly edited for length, along with my evaluation of ChatGPT. 

Prompt: “Provide a history of the IT research firm Computer Economics.”

ChatGPT responded: 

Computer Economics is a research and consulting firm specializing in the field of information technology (IT). The company was founded in 1982 in Newport Beach, California, by John L_________, who currently serves as the firm's President and CEO. 

There are four factual errors in the second sentence of this opening paragraph! Computer Economics was founded in 1979, not 1982, as documented in this Wikipedia entry. (Wikipedia is a primary source for ChatGPT's training data, so how did it miss this?) Moreover, the firm was founded in New York City, not Newport Beach, by Bruno Bassi, not John L., and John L. was never the company president. He was our VP of Research, from 2007 until he retired in 2017. ChatGPT just made this all up. 

But wait, there's more! A couple of paragraphs later, ChatGPT writes: 

Computer Economics … has received several awards for its work, including recognition from Forrester Research as a “Leading IT Research and Advisory Firm” and from Gartner as a “Top IT Research and Consulting Firm.” 

But neither Forrester nor Gartner has any such award program. In fact, it would be hilarious if they did since they are both IT research and consulting/advisory firms themselves! ChatGPT just made up that part. So, I gave ChatGPT another prompt and a chance to correct itself. 

Prompt: “Please provide more information about Gartner's award program for Top IT Research and Consulting firms. When was this program established and who were some of the most recent winners of this award?” 

ChatGPT responded: 

ChatGPT Hallucinations

Apparently, ChatGPT is not aware of the First Law of Holes: When you find yourself in one, stop digging. 

My prompt asked who some recent award winners were. Now it says the winners are not publicly available. What kind of award keeps the winners secret? Moreover, if the winners are secret, how does it know Computer Economics was one of them? At the same time, the winners must not be secret, because they “can be found in Gartner’s annual report on the market for IT research and consulting services” (which, of course, does not exist).

Risks in the Use of ChatGPT for Research

In summary, here are some observations on the risks of using ChatGPT as a virtual research analyst.  

  1. Fiction parading as fact. As shown above, ChatGPT is prone to simply make up stuff. When it does, it declares it with confidence—what some have called hallucinations. Whatever savings a research firm might gain in analyst productivity it might lose in fact-checking since you can’t trust anything it says. If ChatGPT says the sun rises in the east, you might want to go outside tomorrow morning to double-check it.  
  2. Lack of citations. Fiction parading as fact might not be so bad if ChatGPT would cite its sources, but it refuses to say where it got its information, even when asked to do so. In AI terms, it violates the four principles of explainability.
  3. Risk of plagiarism. Lack of citations means you can never be sure if ChatGPT is committing plagiarism. It never uses direct quotes, so it most likely is paraphrasing from one or multiple sources. But this can be difficult to spot. More concerning, it might be copying an original idea or insight from some other author, opening the door to the misappropriation of copyrighted material. 

Possible Limited Uses for ChatGPT

We are still in the early days of generative AI, and it will no doubt get better in the coming years. So, perhaps there may be some limited uses for ChatGPT in writing research. Here are two ideas. 

The first use might be simply to help overcome writer’s block. We all know what it’s like to start with a blank sheet of paper. ChatGPT might be able to offer a starting point for a blog post or research note, especially for the introduction, which the analyst could then refine. 

An additional use case might be to use ChatGPT to help come up with a structure for a research note. To test this, I thought about writing a blog post on the recent layoffs in the tech industry. I had some ideas on what to write but wanted to see if ChatGPT could come up with a coherent structure. So, I gave it a list of tech companies that had recently announced layoffs. Then I gave it some additional prompts: 

  • What do these companies have in common? Or are the reasons for the layoffs different for some of them? 
  • As a counterpoint, include some examples of tech companies that are hiring.
  • Talk about how these layoffs go against the concept of a company being a family. Families do not lay off family members when times are tight. 
  • Point out that many employees in the tech industry have never experienced a downturn and this is something that they are not used to dealing with.

The result was not bad. With a little editing, rearranging, and rewriting it could make a passable piece of news analysis. As noted earlier, however, the results would need to be carefully fact-checked, and citations might need to be added. 

One word of warning, however: In order to learn, young writers need to struggle a little, whether it is by having to stare at a blank sheet of paper or constructing a narrative. I am concerned that the overuse of tools like ChatGPT could deny junior analysts the experience they need to learn to write and think for themselves. 

The larger lesson here is that you can’t just ask ChatGPT to come up with a research note on its own. You must have an idea and a point of view and give ChatGPT something to work with. In other words, treat ChatGPT as a research assistant. You still need to be the analyst, and you need to make the work product your own. 

I will be experimenting more with ChatGPT in the near future. Hopefully, improvements in the tool will mitigate the problems and risks.

Update Feb. 20, 2023: Jon Reed has posted two lengthy comments on this post with good feedback. Check them out below in the comments section. 

Sunday, October 09, 2022

What If You Held a Metaverse Party and Nobody Came?

The metaverse just might be the next big thing, but according to two reports this week, that time is not yet. 

The first story is from CoinDesk, which reports that the two leading decentralized metaverse platforms, Decentraland and The Sandbox, average fewer than 1,000 daily users each. Yet each is a unicorn, with over $1 billion in valuation. 

What’s going on in the metaverse these days, you might ask. Looking at two of the biggest companies with over $1 billion valuations, the answer is surprising: Not much, or at least not enough to bring users back every day. According to data from DappRadar, the Ethereum-based virtual world Decentraland had 38 active users in the past 24 hours, while competitor The Sandbox boasted 522 active users in that same time.

An active user, according to DappRadar, is defined as a unique wallet address' interaction with the platform’s smart contract.
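To make that definition concrete, here is a hypothetical sketch (the function name and data are my own illustration, not DappRadar's actual methodology) of counting daily active users as unique wallet addresses that interacted with a contract in a 24-hour window:

```python
from datetime import datetime, timedelta

def daily_active_users(interactions, now):
    """Count unique wallet addresses that touched the contract in the
    24 hours before `now`. Each interaction is a (wallet, timestamp) pair."""
    cutoff = now - timedelta(hours=24)
    return len({addr for addr, ts in interactions if ts > cutoff})

# Hypothetical interaction log: one wallet calling twice still
# counts as a single active user.
now = datetime(2022, 10, 9, 12, 0)
log = [
    ("0xabc", now - timedelta(hours=1)),
    ("0xabc", now - timedelta(hours=2)),   # repeat wallet, counted once
    ("0xdef", now - timedelta(hours=23)),
    ("0x123", now - timedelta(hours=30)),  # outside the 24-hour window
]
print(daily_active_users(log, now))  # → 2
```

The deduplication is the point: raw transaction counts would overstate activity, so the metric collapses repeat interactions down to distinct wallets.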

This matches my own observation a few weeks ago when I created an account on Decentraland. Apart from the clunky graphics, the thing that struck me was, there's no one here! Until I read the CoinDesk report, I thought maybe I was doing it wrong. But apparently not.  

So, maybe the centralized metaverse platforms, such as Meta’s (formerly Facebook’s) Horizon Worlds, are where the action is. Apparently not. According to this report on The Verge, the user experience on Horizon Worlds is so bad that management under Mark Zuckerberg has to encourage, cajole, and even beg its own metaverse developers to use it.   

In a follow-up memo dated September 30th, Shah said that employees still weren’t using Horizon enough, writing that a plan was being made to “hold managers accountable” for having their teams use Horizon at least once a week. “Everyone in this organization should make it their mission to fall in love with Horizon Worlds. You can’t do that without using it. Get in there. Organize times to do it with your colleagues or friends, in both internal builds but also the public build so you can interact with our community.”

On the other hand, we are already seeing real value in some early metaverse business applications. Two weeks ago, I co-moderated a metaverse panel discussion at Innovate@UCLA. One of the panelists, Chris Mattmann, Chief Technology and Innovation Officer at the Jet Propulsion Laboratory, described how JPL is already using metaverse-like digital worlds to great success for employee onboarding, virtual tours, and virtual meetings.  

Early adopters, like JPL, give an indication of where the value may lie. But for now, as far as public metaverse platforms go, it appears we are close to or at the peak of the hype cycle. 

On the third hand, I’ve been wrong before. As I wrote earlier this year: Predictions are hard, especially about the future.

Image Credit: Decentraland, via CoinDesk. 

Sunday, August 07, 2022

An Innovator’s Story: Creating a Business for Lasting Success

Back in May, I had the opportunity to do an on-stage interview with Jamie Siminoff, founder and CEO of Ring, as part of Avasant's Empowering Beyond Summit.

Ring, the first provider of video doorbells, is an interesting case study in innovation. Siminoff founded the firm in 2013, and, despite walking away from an episode of Shark Tank with no money, grew it to disrupt the home security industry.

Siminoff eventually sold Ring to Amazon in 2018 for over $1 billion. Now, under Amazon’s ownership, he continues to manage Ring, which has grown to be the largest home security camera brand in the world.

Over on the Avasant website, I put together a summary of Siminoff’s keynote and my on-stage interview around two broad themes:

  • Lessons learned in innovation, based on Ring’s invention. 
  • How to ensure success when an innovative startup is acquired by a much larger enterprise.

The research byte concludes with Siminoff’s view on how business leaders in traditional organizations can apply the lessons in innovation.

Read the research byte on the Avasant website: An Innovator’s Story: Creating a Business for Lasting Success

Sunday, May 15, 2022

Predictions Are Hard, especially about the Future

Gemco Membership Card
With nearly half a century in enterprise IT, I have had plenty of time to see how technology predictions over the years have been fulfilled—or not fulfilled. This was brought home to me recently while reviewing an old project document.

But first, some context. As noted in my previous post, I felt forced by a business downturn in 1983 to resign from Smith Tool and take an IT manager position at Gemco, a now defunct membership department store, then owned by Lucky Stores. This returned me to my retail roots.

A Prescient Prediction

Although I only stayed at Gemco a few months, I was put in charge of a strategic systems project: To define the requirements for a new merchandising system. We started by interviewing the senior leaders of the firm and worked our way up the organization until we reached the final interview with the CEO, Peter Harris [1].

The interview summary, dated October 18, 1983, is quite interesting, especially in one paragraph where Harris said:
We need to recognize the changes that will come in the next decade due to the spread of advanced telecommunications. It is likely that 50% to 70% of basic hardgoods and commodities will be purchased from home, eliminating the need for store visits. However, apparel and other fashion merchandise will continue to be purchased in store environments, because of the psychological need to “go shopping.”
Today, I do not recall anyone in the retail industry in the early 1980s predicting the dawn of B2C e-commerce. And apparently, even 10 years later, I was still a skeptic. In the margin of that final report, there appears a note, in my own handwriting.
How wrong he turned out to be! –FS, 3/15/93 (10 years later!)
Peter Harris Interview Quote

But little did I know, 1993 was the year that the U.S. Congress passed a law to commercialize the Internet, and it was also the year that CERN put the World Wide Web, which Tim Berners-Lee had invented in 1989, into the public domain. And, one year later, Jeff Bezos founded Amazon. But it took another two decades before a worldwide pandemic pushed B2C e-commerce for certain categories of goods to the levels that Peter Harris predicted nearly 40 years earlier.

So, no, Harris’s prediction was not wrong. He was just off by about 30 years.

Lesson Learned: Keep an Open Mind

As Yogi Berra once said, predictions are hard, especially about the future. Like many others, I tend to be a skeptic, always looking for the negative side of an idea, or what could go wrong. In fact, a few years ago, I wrote a blog post mocking fellow analysts who make year-end predictions. I don't like to make predictions myself and I tend to be skeptical of those who do make them. I have to make a conscious effort to fight this tendency.

So what predictions are out there that might seem far-fetched today but could eventually be realized?
  • The Metaverse. There are many breathless predictions these days about “the metaverse,” a virtual world where people and organizations can live and interact in a persistent and immersive 3D environment, where they can own virtual property, trade virtual goods, and be educated or entertained. Some argue that the metaverse already exists with various gaming platforms. Others think it is being overhyped by social media companies, such as Facebook (now branded as Meta) that are otherwise out of ideas about how to keep people engaged on their platforms in order to target them for advertisements.
  • Non-Fungible Tokens. NFTs have been a hot market over the past year, with sales of digital art, secured by NFTs on a blockchain, trading for thousands or millions of dollars. The fact that any piece of digital art can be saved with a mouse right-click makes it difficult to understand what exactly an NFT denotes in terms of ownership. The recent and rapid decline in the value of various NFTs confirms to skeptics that they are nothing more than the 21st century equivalent of Tulipmania.
  • Cryptocurrencies. Digital currencies using cryptography, such as Bitcoin, are built using blockchain technology. In contrast to fiat money, such as the US Dollar, they are not backed by a central government but are decentralized, permissionless, and virtually impossible to corrupt. Advocates predict they will replace fiat money, or at least exist alongside it, providing a hedge against inflation and very low transaction costs compared to traditional currency exchanges. At this writing, there is a collapse in cryptocurrency markets, confirming the view of crypto-critics that the whole thing is one big bubble.
It is easy to be a critic, or as Edward de Bono taught, to put on the black hat. It is not so easy to see the problems with an idea while at the same time seeing where there could be value. It is even more difficult to predict when exactly that value might be realized.

Sometimes, predictions are not wrong. They just take longer than we think to be realized.


[1] Peter Harris is an interesting person, starting as a stocking clerk at Gemco and eventually working his way up to President from 1980 to 1984, when the firm achieved $2.2 billion in revenues. He and his partner later acquired FAO Schwarz, where he served as CEO until 1992. Later, he became the President and CEO of the San Francisco 49ers (2000-2004) and held several other leadership positions after that. Today, he is retired and serves on several boards, including the Palo Alto Medical Foundation. He is still on LinkedIn.

Update, May 22, 2022 

One of the joys in writing this series of career posts is reconnecting with people I worked with decades ago.  So, I sent a message to Peter Harris on LinkedIn.
Peter, I'm sure you don't remember me, but I interviewed you in 1983 at Gemco. I just wrote a blog post about your prediction about E-commerce. [Link to this post.] Let me know any feedback. --Frank
This morning he wrote back:

Frank, I am absolutely blown away to hear from you and read of your perspective, highlighted of course by your absolutely amazing record keeping mention of something I said many years ago.  While I think 30 years early doesn't count as anything beyond being impracticably thoughtful, I was honored and  hugely appreciative to be recognized.  Your article is fascinating and I am now following you so that I might observe and learn from your thinking and musings on other topics.  That you have tracked me down on LinkedIn and shared it means a lot.  The appropriate comments are "way cool," "awesome" or maybe even "wowza."  Thank you so very much.   I'd be interested to hear a bit more than is visible on LinkedIn about what you are doing now if you have time to share. --Peter

[Posted with Peter's permission.]

Update, Aug. 8, 2022

The same year, 1983, Michael Dertouzos made this incredible prediction of the World Wide Web. Click to watch. 
Michael Dertouzos video thumbnail


Wednesday, April 20, 2022

The Most Significant System Development Project of My Career

Drill rig
This post continues my series on lessons learned in my nearly half century in enterprise IT. We started in 1974 with my job at Macy’s headquarters in Manhattan, followed by my move to California in 1976 and my job at TRW Credit Data. I then took a job at Smith Tool in 1978, where I got thrown into the deep end with manufacturing systems. This led to several more important lessons learned, including the failure of a waterfall development project and my first encounter with shadow IT.

But there were more lessons to be learned at Smith Tool. 

Next would be the biggest and most important project of my career. Rolling off a series of manufacturing system development projects, I was now assigned to a task force to develop a strategic system to analyze the performance of Smith’s drill bits in the field. I would be the project manager for a small team of developers and the overall system architect. 

An Unspoken Objective

The first phase was to build a bit record database, which would become the foundation for several future systems. The database, which would ultimately contain millions of historical drilling records from around the world, would be used for preparing well proposals, evaluating product performance, conducting competitive analysis, and providing a feedback loop from the field to engineering to improve product quality. 

But there was another, unstated objective. Smith Tool had been sued for patent infringement by Hughes Tool (the business that made Howard Hughes his initial fortune). The patent was for a novel application of an O-ring, which sealed the lubricated bearing of the three roller cones from the harsh downhole conditions. O-rings (made famous for their failure in the space shuttle Challenger disaster) were in common use at the time, but Hughes had discovered that if you squeezed the O-ring a bit it actually extended the life of the seal. This was counter-intuitive, but it worked. The litigation had been dragging on for over a decade, starting with Smith getting a federal court in 1979 to invalidate the patent, and Hughes getting a federal appeals court in 1982 to reverse that decision. That was just before I was assigned to the development project, which would be an important element in Smith’s defense. 

The lawsuit, for about $1 billion, was at that time the largest patent infringement case in history. The lawsuit alleged that Smith’s use of the Hughes patent made Smith’s bits competitive with Hughes, earning Smith profits that it would otherwise not have earned. To defend against the Hughes claim, Smith would need a system to provide the data analysis. 

None of this was mentioned to me at the time. I only knew that the project was getting me a lot of attention from top management. In fact, my old manager, Rodger Beard, recently told me that at corporate headquarters they were talking about how my system would “save their bacon.” 

Lesson Learned: Immerse Yourself in the Business

Shortly after the project kick-off, I learned that there was a week-long training program about to start for new field sales people. I invited myself in and got to sit through detailed lessons on Smith’s products and how they were used by customers. I found the whole week fascinating. [1]

Halfway through the week, Dan Burtt, the IT director, noticed I was not at my desk and found out about the class. “Why is Frank taking sales training?” he asked. I managed to convince him to let me finish. 

Since I had been developing or maintaining many of Smith’s manufacturing systems, I already understood the engineering and production data that would be needed to correlate with field performance. What I lacked was an understanding of that field data. My degree in Geology helped, but all of this was mostly new information. 

There were also some thorny design problems, such as how to designate well locations in different parts of the world, using different coding schemes. I spent several hours at the UC Irvine library learning about various geographic location systems in use in the U.S. and around the world, such as the section-township-range system, originally proposed by Thomas Jefferson.

In any new system development project, you have to start with a deep understanding of the business. It is not enough to have users tell you what they need. It’s more than gathering requirements. You have to have a sense of curiosity and immerse yourself in the industry and the business.

Lesson Learned: Take Advantage of Career Adversity

But the oil industry is notorious for booms and busts, and we were heading into a major bust. There was a massive company layoff, and the IT staff was not excluded. With fewer IT personnel, we didn’t need as many first level managers, so I was demoted back to project manager. Even worse, after I finished the requirements definition, my project was put on hold pending budget approval to move forward. This was the last straw. I resigned in August 1983 and returned to my retail industry roots, taking an IT manager position at Gemco, a now-defunct membership department store.

Beta Management Systems Logo

But, after a few months I got a call from Smith. The bit record project had been funded. Could I come back to lead it? I said yes, on one condition: I wanted to come back as a consultant, not an employee. I had been thinking for some time about a consulting career, and this was my opportunity. Smith agreed—I had so much knowledge of the project and the business requirements that it seemed like a small request. 

This launched my consulting career, as a sole proprietor doing business as Beta Management Systems. [2] [3]

Development, Implementation, and a Move into the Business

Now I was back at Smith, leading a small team of developers. I designed the system mostly as an online system (IBM’s CICS) but with a little batch programming to extract manufacturing and engineering data on a nightly basis. As usual, I wrote some of the most important code myself. The database was eventually going to hold millions of records, and it would be used for online analytical processing (OLAP), so it needed to be fast. I designed the database in IBM’s VSAM, and I set up alternate indices to provide quick access for the most common types of standard reporting. This was before the days of widespread use of relational databases, or at least before Smith had one. 
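The idea behind those VSAM alternate indices survives in every modern database as the secondary index. As a loose analogy only (not VSAM itself, and the record fields here are my own invention), a sketch in Python shows why a secondary index makes lookups by a non-primary field fast, instead of scanning every record:

```python
from collections import defaultdict

# Primary store: bit records keyed by record ID (analogous to the base cluster).
records = {
    1: {"well": "W-100", "bit_type": "F2", "footage": 1200},
    2: {"well": "W-100", "bit_type": "J3", "footage": 800},
    3: {"well": "W-200", "bit_type": "F2", "footage": 1500},
}

# Secondary ("alternate") index: bit_type -> list of record IDs,
# maintained alongside the primary store so common queries avoid a full scan.
by_bit_type = defaultdict(list)
for rec_id, rec in records.items():
    by_bit_type[rec["bit_type"]].append(rec_id)

# Look up all runs of one bit type directly through the index.
f2_runs = [records[i] for i in by_bit_type["F2"]]
print(sum(r["footage"] for r in f2_runs))  # → 2700
```

The trade-off is the same now as it was then: each extra index speeds reads on one access path at the cost of extra maintenance on every write.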

For the OLAP reporting, I used something new. The year before, I had gotten trained in FOCUS, a fourth-generation language from Information Builders (acquired by TIBCO in 2021). This was an excellent tool for reporting and analysis, especially for ad hoc inquiries. This is how I would develop the OLAP reporting that would prove instrumental later on in supporting the patent litigation. 

Initial system development took less than a year. I still have a copy of the system user guide, dated November 1984. Users began loading bit records in 1985. 

As soon as the system went into production, there was no more need for me in the IT department. But there was a huge need in the engineering department, where all that ad-hoc analysis would need to be done against the database. So, I left IT and went down the street to the “Hobie Cat Building” (the former owner) to begin as a consultant in the engineering group known as Product Evaluation. [4] 

Within Product Evaluation, I became part of a small team to develop the OLAP reporting for the bit record system. Looking back, this was the best experience I’ve ever had in a team. There was our manager, Jim Watson, who was a metallurgist by training and product failure analyst. Jim became a personal friend of mine over the years. There was Steve Steinke, a geologist, who provided the knowledge of the oil field. Rounding out the team was Joel Palmer, a statistician, who ensured that our analysis was statistically valid. Then there was me, the systems guy. 

Lesson Learned: Understand Basic Statistics

Textbook cover--Calculate Basic Statistics
Looking back, I now appreciate how the statistical validity of our analysis would be critical. This was important not only because we needed to ensure that the conclusions of our analysis were on a sound footing generally, but also because some of our analysis would be presented in court in Smith’s legal defense. 

I had started out as a math major at UPenn, but I’d never had a course in statistics. So, even though we had a statistician on our team, Smith brought in Dr. Mark Finkelstein, a mathematics professor from UC Irvine, to coach us once a week on basic statistics. He used his own textbook, pictured nearby. We learned about descriptive statistics and inferential statistics, regression, correlation, and confidence intervals. 

The key point I learned was this: Just because a data set appears to show a correlation between two variables, it might not be statistically significant. For example, I might be asked to divide a sample of bit runs from a group of nearby wells into three groups according to some engineering parameter. My analysis might show that as the parameter increases, the bit performance improves. But that conclusion might be spurious. On more than one occasion I had to tell the requestor that, even though a graph might appear to support his theory, the statistics did not. 
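To make that concrete, here is a small illustrative calculation in plain Python (the data is hypothetical, chosen only to show the effect): a five-point sample with an apparent trend whose correlation nonetheless fails the standard t-test for significance.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical bit runs: engineering parameter vs. performance.
x = [1, 2, 3, 4, 5]
y = [2.0, 1.0, 3.5, 2.5, 4.0]

r = pearson_r(x, y)                       # about 0.73 -- looks like a trend
n = len(x)
t = r * math.sqrt((n - 2) / (1 - r * r))  # t-statistic for H0: no correlation
t_crit = 3.182  # two-tailed 5% critical value of Student's t, df = n - 2 = 3

print(f"r = {r:.2f}, t = {t:.2f}")
print("significant" if abs(t) > t_crit else "not significant at the 5% level")
```

With only five points, a correlation of about 0.73 produces a t-statistic near 1.8, well short of the 3.182 needed for significance at the 5% level; the graph would look convincing, but the statistics would not support the theory.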

Eventually, the Smith lawyers asked me to perform statistical analysis in support of the patent litigation. In response to the court ruling that Smith was infringing on the Hughes patent, Smith had redesigned its bits to use an older seal, called a Belleville seal, instead of the O-ring. Smith contended in court that the new seal provided performance equal to that of the O-ring, and my analysis supported that conclusion. But the new seal was more expensive than an O-ring, increasing the cost of a tricone bit by about $29. According to a Los Angeles Times account of the trial: 

According to [Judge] Hupp’s chronology of the events that led to Smith’s using Hughes’ patented device, Smith stopped manufacturing the Belleville-type seals in 1972, in part because they made the Smith device cost about $29.02, or an estimated 3.2% of the total purchase price, more than the competing Hughes product.

Smith’s attorneys argued, therefore, that the damages to be awarded Hughes should be calculated based on the difference in product cost for the half million infringing bits, or about $14.5 million, rather than the billion-plus that Hughes was claiming. 

Bottom line, as I was told: The judge agreed that the performance of the Belleville seal was equal to that of the O-ring but did not agree that damages should be based on the difference in cost. The judge assigned damages of just over $200 million. In other words, we won the battle that I was fighting, but lost the larger war. [5]

My appreciation of statistics would benefit me later in my career, when Dan Husiak and I acquired the IT research firm Computer Economics. I took over the research group, which collected and published metrics on IT spending and staffing. Many times, I was confronted with what appeared to be a correlation between IT spending and some other metric. My experience from Smith Tool taught me to be skeptical if the sample size was small. 

Postscript: Successor System Still Delivering Value

DRS Drilling Record System log in panel
The combination of the court judgment, a continuing downturn in the oil industry, and some poor business decisions was too much for Smith to overcome. The company filed for Chapter 11 bankruptcy protection, divested noncore businesses, and was able to come out of bankruptcy in the same year. I was still working as a contractor to Smith through this entire time, but at less than a full-time basis. This gave me time to develop business with other clients. 

So, what happened to the Bit Record Database? In 1988, while I was winding down my work on the system, Steve and Jim delivered a presentation at the IADC/SPE Drilling Conference. They reported that the system contained 100,000 bit records. They also reported that the team had built an interface from the mainframe to PCs running dBase in field offices. This was how they were preparing bit programs for new wells. 

Then, in the mid-1990s, I got in touch again with Steve, who told me that Smith had migrated the system from the IBM mainframe to a personal computer running the Progress database. 

So, in writing this post, I got curious: Where is the Bit Record System today? Smith was acquired by Schlumberger in 2010, who rebranded the Smith Tool business as Smith Bits. A little digging uncovered a recent edition of the Smith Bits product catalog, and it has an interesting page on something called the “DRS drilling record system.” 

The Smith Bits DRS drilling record system is a collection of nearly 3 million bit runs from virtually every oil and gas field in the world. The database was initiated in May 1985, and since that time, records have been continuously added for oil, gas, and geothermal wells. With this detailed data and the capabilities of the IDEAS platform, Smith Bits engineers can simulate bit performance and make changes to their bit designs to optimize performance in a specific application. [Emphasis added]

With that date of May 1985, I have no doubt that this is the successor to the Bit Record Database. It is interesting that Schlumberger has renamed the system as the Drilling Record System. The new name may reflect the fact that, even in my original design, the system included data on bottom hole assembly tools other than rock bits, as well as other drilling data such as hydraulics. We called it the Bit Record Database because the form that the system was based on was commonly called a bit record. A DRS screen shot is shown below (click to enlarge). 

DRS Drilling Record System screen shot

Update, Aug. 13, 2022. I have now reconnected with my old teammate, Steve Steinke, who retired two years ago from Schlumberger's Smith Bits group. Steve worked with the DRS system over all those years since we were together. Steve confirmed my recollection of our discussion in the early 1990s that Smith converted the system to a single PC running the Progress database. The main motivation for this was to get off the mainframe. Then around 1999, Smith rewrote the system on an Oracle platform. At the same time, they greatly expanded its functionality to include records of other downhole tools besides rock bits. The team continued to expand the system to include records of other drilling equipment and systems as well. It now even includes geological data, such as formations encountered at various depths. Today it contains something like 1.5 million wells and is used by other Schlumberger business units in addition to Smith Bits. 

In an interesting side note, Steve confirms that the worldwide geographic location coding system I developed is still part of the system design. But Steve personally enhanced the design to automatically derive latitude-longitude from section-township-range, to more easily identify offset wells. 
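Deriving latitude and longitude from a section-township-range description is a nice little geometry problem. The toy sketch below is my own illustration, not Steve's algorithm: it assumes ideal six-mile townships measured from a known baseline and principal meridian, and ignores the survey corrections a real Public Land Survey System conversion would need.

```python
import math

# Rough average miles per degree of latitude; longitude degrees shrink
# with latitude, which is handled below.
MILES_PER_DEG_LAT = 69.0

def section_offset_miles(section):
    """Center of a section (1-36) within its township, as (miles south,
    miles west) of the township's NE corner. Sections are numbered
    boustrophedon: 1-6 run east to west across the top row, 7-12 run
    west to east, and so on."""
    row, col = divmod(section - 1, 6)
    if row % 2 == 1:              # odd rows (0-indexed) reverse direction
        col = 5 - col
    return row + 0.5, col + 0.5

def approx_lat_lon(base_lat, base_lon, township_n, range_w, section):
    """Approximate section center for Township <township_n> North,
    Range <range_w> West, given the baseline/meridian origin."""
    miles_s, miles_w = section_offset_miles(section)
    miles_north = township_n * 6 - miles_s    # baseline to section center
    miles_west = (range_w - 1) * 6 + miles_w  # principal meridian westward
    lat = base_lat + miles_north / MILES_PER_DEG_LAT
    lon = base_lon - miles_west / (MILES_PER_DEG_LAT * math.cos(math.radians(lat)))
    return lat, lon
```

Under these idealized assumptions, Township 1 North, Range 1 West, Section 36 lands about half a mile north of the baseline and half a mile west of the principal meridian, which is the southeast corner section of that township.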

In any event, I am proud that the system development work I did in the 1980s, over a period of about eight years, still continues to deliver value today. 


[1] The training sessions were not all technical. There were lessons on how to behave properly in the field, including advice such as, when driving through a gate on a cattle ranch, be sure to close the gate behind you.  Another lesson told us not to beg for business or claim that you’ll get fired if you don’t make the sale—unless that’s the only way to close the deal. There was another lesson with a pamphlet entitled, “How to Turn WAGs into SWAGs,” where a SWAG is a scientific WAG. It had something to do with using data in sales proposals. We also learned that in the early days, Smith was known as the “Whisky Bit,” because sales people would put a bottle of whisky in the pin of the bit. So, when the roughnecks would get thirsty, they’d say, “Let’s open one of them whisky bits.” 

[2] There was no significance to the word Beta. I didn’t have money to spend on a logo, so I figured I could get the printer to use the Greek letter beta in place of the normal font. That allowed me to use the business name as a logo. 

[3] Having at least a year of guaranteed contract work, maybe more, was a huge factor allowing me to break into consulting. A year earlier, our third child, Joanna, was born, and we had just bought our first home. Finances were tight. As it turned out, though, my work with Smith took me through most of the 1980s as I then began to add other clients, mostly through referrals from other “Smithereens” (people who had quit or left Smith during the rounds of layoffs). 

[4] Among other responsibilities, the Product Evaluation group provided post-mortem analysis of bits that failed in the field. They had a large room that they called the “morgue,” with bits that had failed, laid out in table top trays. The group included metallurgists and engineers that did root cause analysis to determine the causes of failures and make recommendations for changes in product design, manufacturing processes, and quality procedures.  

[5] This was a stressful time, with the Smith legal team often asking for additional ad-hoc analysis, sometimes just as I was about to leave for the day. But, to their credit, they did a good job keeping my name out of discovery so I wouldn’t have to be deposed. I think it helped that I was a contractor and not a Smith Tool employee. Not that we had anything to hide. But it wouldn’t have been a pleasant experience. Jim and Steve were deposed and testified in court. I got to see a trial transcript, and from what I read and what they told me, it was grueling. 

Photo Credit: Drill Rig, Pixabay

Friday, December 24, 2021

Cerner Acquisition to Launch Oracle Higher into Healthcare

Oracle Logo and Cerner Logo with medical doctor using a touch screen
Earlier this month, Oracle and Cerner jointly announced an agreement for Oracle to acquire Cerner, a provider of digital systems to healthcare providers. The deal, valued at approximately $28 billion, will be the largest in Oracle’s history, nearly three times the size of its PeopleSoft acquisition in 2005.

To understand the rationale behind the deal and what it means for the two companies, the industry, and especially for Cerner customers, we interviewed Avasant partners, consultants, and fellows who focus on the healthcare industry.  This research byte summarizes our point of view.

Read this post on the Avasant website: Cerner Acquisition to Launch Oracle Higher into Healthcare

Sunday, October 24, 2021

My First Encounter with Shadow IT

TRS-80 Home Computer
In my recent post on what I learned about enterprise IT at Smith Tool, I mentioned that I needed another few posts to cover some of the more interesting lessons learned. I already covered what I learned from a failed waterfall development project in 1980. But the lessons kept coming, shortly thereafter, in my first encounter with “shadow IT.” 

Shadow IT commonly refers to information systems that are purchased or developed apart from the corporate IT department. 

An Inventory System for Tooling

In 1981, I got a new assignment: to develop a system to manage inventory of perishable tooling in the manufacturing plant [1]. Our manufacturing systems, some of which I had developed, did a fairly good job of managing inventory of direct material—raw materials and parts that went directly into the finished product. But they did not yet fully manage the inventories of tooling that were needed to make those parts, such as cutting tools and grinding wheels. Managing tool inventory was important, because a stock-out of tooling could delay a production order just as much as a stock-out of direct material could. 

We had already built systems to maintain tooling bills of material (tool lists) and to associate those tool lists to production orders. We had also built a system to track tool usage on production orders. But we had not yet closed the loop to track on-hand inventory of tooling and to plan for replenishment based on production plans. The existing manual system was nothing more than an intricate paper-based min/max system that required a physical inventory count three times a day! Expediting to cover shortages of tooling was a way of life. As a result, my analysis showed that 90% of manufacturing production order delays were the result of tooling being unavailable. The benefits of an automated system would be huge [2]. 
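The logic behind a min/max system is simple enough to sketch. The example below is hypothetical (the field names, tool IDs, and quantities are mine, and the real system was a paper card file counted three times a day): whenever a tool's available quantity falls to its minimum, order enough to restore the maximum, net of what is already on order.

```python
# Illustrative min/max replenishment check, as the manual card system
# implemented it on paper. All names and quantities are hypothetical.

def replenishment_orders(tool_records):
    """For each tool at or below its min, order enough to restore max,
    net of quantity already on order."""
    orders = []
    for t in tool_records:
        available = t["on_hand"] + t["on_order"]
        if available <= t["min"]:
            orders.append((t["tool_id"], t["max"] - available))
    return orders

crib = [
    {"tool_id": "DRILL-0500", "on_hand": 12, "on_order": 0,  "min": 20, "max": 100},
    {"tool_id": "WHEEL-GR32", "on_hand": 55, "on_order": 0,  "min": 20, "max": 100},
    {"tool_id": "TAP-0375",   "on_hand": 5,  "on_order": 10, "min": 15, "max": 60},
]
print(replenishment_orders(crib))  # [('DRILL-0500', 88), ('TAP-0375', 45)]
```

The weakness of min/max is that it is purely reactive: it knows nothing about upcoming production orders. Closing the loop meant replacing this logic with replenishment driven by planned tooling demand, which is what the mainframe system was meant to do.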

Some perishable tooling items were purchased. The rest were fabricated in Smith’s own tool-making shop or subcontracted to outside tool makers. The tool room was, in essence, a factory within a factory, and it would ultimately require a manufacturing planning and control system linked to the main MRP system at Smith Tool. This is a key point in the lesson learned to come. 

A Startling Discovery

My first step was to take a walk out to the tool crib to meet with the Manager of Tool Control (“Fred”), gather some data, and talk to him about his requirements. The conversation, as I recall it, went something like this. 

Me: “Hi Fred, as you may have heard, we’re starting to gather requirements for a new Tooling Inventory System to replace your manual system.” 

Fred: “Oh, no need to do that, we’re going to install the computer system that they’re using over in the San Bernardino plant.” 

Me: “Wait, what?” 

Fred: “Yeah, the guys over in San Bernardino didn’t want to duplicate our manual system, so one of the NC programmers put together a system on one of those TRS-80 computers you can get at RadioShack [3]. It only took him a few weeks to program it.”

Me: “Fred, wait a minute. I’m not out here setting up my own tool room. So, you guys shouldn’t be setting up your own computer system. That's my department.” 

After this, I managed to calm down and got a walk through to see the tool crib operations and to gather some sample documents. 

When I returned to the IT department, I went straight to Rodger’s office to tell him what happened. He told me that I’d better take the 90-minute drive out to San Bernardino to see this rogue TRS-80 system. 

Evaluating the Shadow IT System

The IBM PC would be on the market in just a few months (August 1981), opening the floodgates to what would later be called end-user computing. But this was my first encounter. What stunned me the most was not just that my users had usurped my job of systems development but that they appeared to have done in a few weeks what we had planned to do in a six-month effort. However, as I would soon learn, the scope of what they had done on the TRS-80 fell far short of what we were planning to do on the mainframe. 

The first thing I saw involved limitations of the TRS-80 hardware as a business computing platform [4].  These were easy observations. But, as we all know, personal computers would soon overcome those limitations and become a real disruption to mainframe computers.

My more strategic observation, however, was that the TRS-80 system only addressed a single need in what should be a closed-loop, end-to-end process for managing tooling. There were no tooling bills-of-material (tool lists), no tracking of tool usage, no association of tool lists to production orders (at multiple revision levels), and no determination of tooling demand based on planned or released production orders—all functionality that we had already built or were about to build on the mainframe system. I concluded: 

The TRS-80 system in use now by San Bernardino Tool Control basically serves their need for the maintenance of inventory data. It is a simple alternative to the card file used by Irvine Tool Control for this data. However, the long-range use of the TRS-80 has serious limitations as outlined above.

Beginning of a Major Disruption in Enterprise IT

Although I didn't realize it at the time, the world of corporate IT was changing. In reviewing an early version of this post, my manager, Rodger Beard, offered this analysis (lightly edited). 
Our Smith Tool TRS-80 experience demonstrated a trend that was already unfolding regarding the nature of business computing.  Dramatically cheaper computing hardware and operating system software had already started to come on the scene with the introduction of mini-computers, from DEC and HP especially. But the TRS-80 and IBM's hurried, clumsy, poorly conceived PC initiative soon after had a far, far bigger impact.  Mini- and micro-computers enabled the rapid movement away from the IBM computing castles that were then the norm, with budgets that only kings could afford.  Because the dollars were so great and castles took so long to build (with high implementation costs and high risk of failure) there was a critical business need for better and cheaper ways to deploy automated business systems.  The TRS-80 and then PCs offered a way to fulfill that need, and to work around IT departments that were mostly seen as being in the way.
That said, low-cost hardware was just the first leg of a 3-legged stool of disruptive technological innovations that would become manifest over the coming years. The second leg was the faster development time that these platforms offered to build business software.  The TRS-80s at Smith Tool clearly demonstrated that software could be developed more cheaply, faster, and more easily, albeit with certain downsides that we, the knights guarding the IT castle, thought were important, but not as important to many in the business. 
The point is that shadow IT was conceived as the direct solution to an already very well known problem. There was too high a cost, as well as too much delay and pent-up demand for business software.  Packaged software suites, higher level and advanced programming languages, 4GLs like Focus, emerging SDMs, software engineering and coding being taught in public schools, software training as an industry, computer engineering majors offered on every college campus, H-1B visas, outsourcing, and then offshoring, all were solutions intended to solve this problem.  Net of it all, software cost is now tiny compared to when we found out about the TRS-80.

The third leg of the new stool was obviously overcoming the primitive data communications networks of the time as well as costs and delays associated with creating node-to-node communications.  The creation of the internet, and with it fast, cheap, available connectivity was the disruptive change that gave the new IT stool all three legs. (Wow did it ever!)

Lesson Learned: Bring Shadow IT out of the Shadows

Rodger is right. Although my analysis of the TRS-80 system may have been correct in identifying its shortcomings, it took me a few more years to understand the bigger picture. When the business has a need and the corporate IT organization does not have the resources to meet that need, the business will find a way to solve the problem. The days when the IT organization could just say no, or wait until next year, were coming to an end. 

Ultimately, the technology disruption brought its own new set of challenges. For example, user departments that purchased or built their own software were soon asking the IT organization to connect them to corporate systems. Many IT leaders were understandably distraught with these requests when they were not involved in the original development or procurement of the shadow system.   
Over the years, I have found that the healthiest way to deal with shadow IT is to bring it out of the shadows. It is really a matter of IT governance. Best practices in dealing with shadow IT include a multi-year IT strategic plan that addresses major needs throughout the organization, guidelines to determine which systems are best deployed by corporate IT and which can be left to end-user development, an overall enterprise architecture, and budgetary flexibility so that, in some cases, the business funds new system development with the IT organization delivering or managing the services. 

Postscript: My recollection is fuzzy concerning the events that followed. My personnel file indicates I finished implementing the corporate tooling inventory system in 1982, and I moved on to an even more interesting project. But this all took place just before the multi-year decline in oil prices and collapse in the US drilling rig count, which devastated Smith’s business. The San Bernardino plant was shut down, so the use of the TRS-80 system became moot. 

End Notes

[1] In reviewing the system specification I wrote for this project, I notice that I applied the lesson learned from my previous project, where we had a failure with the traditional waterfall development approach. In my system specification for this project, I wrote: 

“Because a project addressing all the known requirements for a tooling inventory system would take over one calendar year to develop, we have adopted a two-phase approach. This allows Manufacturing Services to receive benefits from the project within six months of product initiation. It also allows us to evaluate the use and effectiveness of the system delivered in the first phase before beginning the second.” [Emphasis added.] 

In other words, I was determined to test the users’ commitment to adopt initial capabilities of the new system before IT would spend the time and effort to develop the rest of it.

[2] When I started writing this post, I assumed that tooling inventory control systems would be commonplace today as modules within manufacturing ERP systems. Although SAP appears to have a solution, I am hard-pressed to find many others, outside of a few point solutions. I have a feeling that many customers today manage tooling inventory as a special item type in the production bill of material, which may be adequate for many manufacturers, although this approach has its shortcomings. If readers have insights on this, please leave a comment on this post. 

[3] RadioShack, founded in 1921, was a one-stop shop for all things electronic, from components to personal electronics to micro-computers. It essentially went out of business in 2015. The TRS-80, launched in 1977, was one of the first widely available microcomputers. 

[4] As I wrote in my trip report, 

“The TRS-80 is a microprocessor, and it is not designed for large-scale business systems. It has limitations on file sizes and key lengths…. There are limitations in real storage…. The hardware is designed to be run only 8-10 hours a day. It is designed for the occasional hobbyist or for a light back-office business, not for the day-to-day operation of a heavy manufacturer.” 

Moreover, the user-developed tooling system was only intended to satisfy needs around tooling procurement and inventory control. There was no functionality for tooling bills of material (tool lists) or ability to associate them with production orders. In Irvine, we had already built a system to automate these functions on the mainframe, but San Bernardino de-automated those functions and put them back onto a paper-based system. 

Image Credit 

TRS-80. Attribution: Blake Patterson. Source: Wikipedia Commons.

Saturday, September 18, 2021

What I Learned from a Waterfall Project Failure

In my most recent post on lessons learned in my career, I covered my time as an IT employee at Smith Tool. I learned so much in those years, and I need another few posts to cover some of the more interesting lessons.

By 1980, my work in manufacturing systems had been all in machining operations. Now Rodger gave me a new assignment: to develop a new system to support Smith’s forge plant. This opportunity took me upstream into the forge, which I was told was the largest forge in the United States west of the Mississippi1.

As I noted in the previous post, I loved the nitty-gritty sights and sounds of the metalworking plant. But the forge was another whole level of physicality, almost violent. As you approached the forge, you could hear the hammer and feel the ground shake as the press hammered out parts2. And outside the plant were pallets of newly forged parts, still red hot to the point where you could not stand closer than 20 or 30 yards without feeling the heat.

Here is a good video of a forge plant that gives a sense for what it’s like to be inside one. The press in this video is smaller than the two at Smith. Also, our forge plant was more modern and the parts being forged in the video are different, but the sights and sounds are the same.

Forging Operations Are Tricky to Schedule

Smith used a process called closed die forging. This means that the red-hot steel bar would be pressed into a die in the shape of the finished part. The tricky part is that the die would only be able to produce a given number of forgings before it had to be sent out for “resinking,” a sharpening operation, so that it could produce more forgings. The production scheduler used manual log books to keep track of how much life was left in each die and to know which dies had enough life left to fill an order. But if the log books were not updated correctly, the forge might not be able to meet its production schedule. This was the process the new system would automate, with the benefit of being able to better meet production schedules.

The new system was described in a company newsletter after the project was (supposedly) completed.
Simply stated, the Forge Die Tracking System keeps track of over 3,000 components that make up the 300 forge dies used to stamp out the various forgings to make Smith Tool’s products. Additionally, the system makes the die selections to fill each forge order according to which dies most closely match in wear and on the basis of how much life the dies have left. When the system notes that a die’s life is getting low, it will suggest to the scheduler that the die be used to make as many forgings as is left in it to make, and then be sent out for resinking. So, while the order is being filled, the excess forgings being produced will go into inventory as safety stock.
I was assigned as project manager and lead analyst six months into the project, which had stalled for lack of someone who could develop the calculations for picking the best set of dies for a given order, suggesting the optimum production quantity to “run out” a die, and tracking dies sent out for resinking.

I took it as a challenge. I remember distinctly that this was taking place during the run up to the US 1980 presidential election. So, I wrote my programming specifications with references to identifying candidate dies, nominating them, and then electing the best one. Although I had three programmers reporting to me for this project, I wrote some of that core scheduling logic myself.
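The candidate/nominate/elect idea can be sketched in a few lines. The code below is my hypothetical reconstruction, not the original logic (the field names, the run-out threshold, and the tie-breaking rule are all my own illustration): candidates are dies with enough life for the order, and the elected die is the one with the least excess life, saving fresher dies for bigger orders and flagging nearly spent dies to be run out and resunk.

```python
# Hypothetical reconstruction of the die-selection calculations described
# in the post. Names, thresholds, and the selection rule are illustrative.

RUNOUT_THRESHOLD = 500  # forgings; below this excess, run the die out

def elect_dies(order_qty, dies):
    """Elect the candidate die with the least excess life over the order,
    so fresher dies are saved for larger orders. If the elected die would
    have little life left, suggest running it out and resinking it."""
    candidates = [d for d in dies if d["life_left"] >= order_qty]
    if not candidates:
        return None  # no single die can fill the order
    elected = min(candidates, key=lambda d: d["life_left"] - order_qty)
    run_out = elected["life_left"] - order_qty < RUNOUT_THRESHOLD
    return {
        "die": elected["die_id"],
        "produce": elected["life_left"] if run_out else order_qty,
        "send_for_resink": run_out,  # excess forgings go to safety stock
    }

dies = [
    {"die_id": "D-101", "life_left": 5200},
    {"die_id": "D-102", "life_left": 1300},
    {"die_id": "D-103", "life_left": 900},
]
print(elect_dies(1200, dies))
# {'die': 'D-102', 'produce': 1300, 'send_for_resink': True}
```

Here an order for 1,200 forgings elects D-102 rather than the fresher D-101, and since only 100 forgings of life would remain, it suggests producing all 1,300 and sending the die out for resinking, with the excess going to safety stock, as the newsletter described.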

The New System Installed but Not Implemented

So, what went wrong? There is a little hint of the problem in the final paragraph of that newsletter story on the project:
All the die structures will be on the system by the end of November. The history of all those components will be on by mid-December. The Forge personnel are enthusiastically anticipating the ease and efficiency the system will bring to the Forge operation. [Emphasis added]
Note the future tense.

Here’s how it went down. We put the system into production, and each week I would follow up with the director of forge operations to see how they were coming along with loading the die information. He had assigned this job to his administrative assistant, but the director told me she was too busy to get to it. After several weeks of follow up, it was clear that they had no intention of loading the master file data that the system would need to start scheduling the forge. The system became a very expensive piece of shelfware. I don’t recall how long they let the system continue to run in production without any data to process. But I was soon assigned to another development project.

So, how did this project get approved in the first place? Later I found out that the forge director felt the IT department was spending all its time developing systems for the machining plant, and that it was “his turn” for a new system. The IT steering committee complied.

Lesson Learned: Test User Commitment through Phased Implementation. This was my first real experience with the drawbacks of a waterfall development approach, where you define all the requirements up front, then design the entire system, then program it, test it, and deploy it. In this case, the users were happy to meet with us to provide their requirements and review our system design. But in terms of actually doing any real work, the users were off the hook until we put the system into production. At that point, they were not willing to let a low-level administrative assistant spend the time to do the necessary data entry, or hire a temp worker to backfill her regular duties so she could do so.

After that I vowed never again. What we should have done is build the database and then have the users enter the master file data before we invested more time programming the scheduling logic—the really difficult part, where the bulk of the development hours would go. That would have tested the users’ commitment and saved several months of wasted effort. This was 20 years before the Agile Manifesto, but my software engineering courses at UCI had already taught me about Barry Boehm’s spiral development methodology, which in many ways anticipated Agile. If only I had the foresight and permission to take this approach.

Postscript: In reviewing a draft of this post, the department manager at the time, Rodger Beard, has further recollections. He writes (lightly edited):
I felt at the time and actually still feel a sense of personal failure for allowing this project to unfold the way it did. Exactly the way you've described. A painful recognition, over many months, that good work was being completely wasted.

This was my first experience with having pure politics result in a significant waste of IT resources. However, like you, I learned from it. An aside, at the time, I also felt this system was not well conceived. If it were well deployed, it could have had an excellent ROI. But with the caveat that additional, easy-to-avoid ongoing human capital investment would be necessary to make it pay off. A red flag really.

I think you're spot on regarding how the requirement should have been addressed. This was definitely mine (especially) as well as [name withheld]’s leadership error. We knew it was an "it's my turn" situation. [Our leaders] had decided to throw the forge a bone to shut up the forge director. If only I had had the foresight to ask you to do what your article says should have been done. Sigh.
Thanks again, Rodger, for your confirmation in this series.   

End Notes

1The forge was about 200 yards from the Smith Tool metalworking plant on Von Karman Avenue in what used to be known as the Irvine Industrial Complex. Driving through that area today, now known as the Irvine Business Complex, it’s hard to believe there was such heavy manufacturing there into the 1980s. That part of Irvine is now mostly commercial offices and some distribution or light manufacturing facilities. The metalworking plant today is an Amazon distribution center. 

2The purpose of forging is to form the raw material into a part that has dimensions close to what is needed so that the first machining operation only needs to remove a minimum amount of metal. Forging also improves the metallurgical properties of the material. It is the same process as the ancient blacksmith employed with his hammer to make metal implements, such as horseshoes. In fact, Smith Tool began as a blacksmith shop in 1902 in Whittier, CA.

Photo Credit

Waterfall. The original uploader was PaulHoadley at English Wikipedia., CC BY-SA 2.5, via Wikimedia Commons

Saturday, August 28, 2021

What I Learned at Smith Tool about Enterprise IT

This post continues my series looking back to lessons learned in my career, which started in 1974 at Macy’s headquarters in Manhattan and continued at TRW Credit Data in California in 1976. This post takes me to the next step of my journey.

As noted in the first post, my goal is not just to talk about how technology has changed. Everyone knows that. As incredible as those changes have been over my nearly half-century in the business, it is also fascinating how many things have not changed. Many of the lessons learned still apply today. That’s my focus.

Getting Restless

Although TRW was a great learning experience, I only stayed there for about 18 months. I was getting bored with accounting systems, and I was looking for something where I could continue to develop new skills. I read in Computerworld that manufacturing systems were the next big thing, so in 1978 I started another job hunt.

One of my interviews was with Smith Tool, a division of Smith International, an oil tools manufacturer in Irvine and, at the time, the third-largest employer in Orange County1. They made me an offer, and I accepted. In addition to being able to break into manufacturing systems, the fact that I might be able to somehow apply my degree in geology was also attractive2.

Lesson Learned: Take Charge of Your Career. In the 1970s, our elders commonly advised us that the best way to get ahead was to settle down in a large company and stay for decades. Although that worked for some of my peers, it never resonated with me. I never waited for opportunities to come to me. I would rather be proactive and pursue new directions. For young people, today is no different. Always be thinking about what you need to continue your career development. If your current employer can give you that, great. If not, look elsewhere.

Thrown into the Deep End

The entire IT department at Smith Tool was about 30 people, with 15 of us in application development3. The company’s systems ran on two IBM mainframes. The manager of the applications group was Rodger Beard, and he assigned me to the supervisor of manufacturing systems, Ken Ruiz.

On my first day, Ken sat me down to give me a primer on manufacturing systems. I was totally clueless. Using a white board, he explained the concept of part masters, which defined inventory items—whether finished products, intermediate assemblies, or purchased parts. He also explained product structure records, which defined the relationship between part masters to form bills of material. The capabilities to manage these relationships required a special type of IBM database, known as BOMP (Bill of Materials Processor) and the newer DBOMP (Database Organization and Maintenance Processor). He then went on to explain work centers and routings, which define manufacturing processes.
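The data model Ken drew on the white board translates naturally into code. The sketch below is purely illustrative (the part numbers and quantities are made up, and BOMP/DBOMP managed these structures very differently on the mainframe): part masters define the items, product-structure records link each parent to its components, and a recursive "explosion" walks those links to total up what is needed to build a product.

```python
# Illustrative model of part masters and product-structure records.
# Part numbers and quantities are hypothetical.

part_masters = {
    "BIT-XJ9":    "Finished rock bit",
    "CONE-A":     "Cone assembly",
    "BEARING-7":  "Purchased bearing",
    "FORGING-C1": "Forged cone blank",
}

# Product structure: parent part -> [(component part, quantity per parent)]
product_structure = {
    "BIT-XJ9": [("CONE-A", 3)],
    "CONE-A":  [("FORGING-C1", 1), ("BEARING-7", 2)],
}

def explode(part, qty=1, result=None):
    """Recursively explode a bill of material, accumulating the total
    component quantities needed to build `qty` units of `part`."""
    if result is None:
        result = {}
    for component, per in product_structure.get(part, []):
        need = qty * per
        result[component] = result.get(component, 0) + need
        explode(component, need, result)
    return result

print(explode("BIT-XJ9"))
# {'CONE-A': 3, 'FORGING-C1': 3, 'BEARING-7': 6}
```

This explosion is the heart of MRP: given demand for finished bits, it tells you how many cone assemblies, forgings, and purchased bearings that demand implies, level by level down the bill of material.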

Later that day we went out to lunch. I drove, with Ken and two other co-workers in the car chatting about something called “whip.” Finally, I asked, what is “whip?” Work-in-Process (WIP) was the answer. As I said, I was clueless4.

The next day, Ken gave me my first assignment—to customize and implement the MRP module of COPICS5, which was written in IBM assembly language. This was my first encounter with any type of packaged software, although it wasn’t much more than a collection of assembler source code known as an IBM Field Developed Program, which just meant that an IBM-er wrote it for some customer and now it was available to others. The IBM engineer who wrote it just happened to be assigned to Smith Tool. His name was Roly White6.

My assembler skills were a bit rusty since I had only used them sporadically at Macy’s two years prior. But within a few weeks I had made the necessary modifications and even found a bug in Roly’s code.

I absolutely loved manufacturing systems. They were so much more interesting than accounting systems. I looked forward to going up into the plant mezzanine for meetings with users. I would don my Red Wing steel-toed shoes, safety glasses, and ear plugs, and take my time getting to and from the meeting so I could stand and watch row after row of multi-axis milling machines or modern CNC machines throwing metal chips on the floor7.

Not Smith Tool, but similar, and much smaller. 
My crash course in manufacturing was not limited to on-the-job training. Within a few weeks, Ken mentioned the monthly dinner meetings of Orange County APICS, the American Production and Inventory Control Society (recently renamed as the Association for Supply Chain Management, ASCM). About half of our department would attend each month, and the dinner meetings drew several hundred people. The legendary George Plossl spoke at one meeting and visited us at Smith Tool the next day, where he fielded questions. (I remember one on how to plan capacity for a heat treat operation.) I also enrolled in a four-course series of APICS certification classes at night at Cal State Fullerton, where we learned principles of inventory management, MRP, master scheduling, statistical forecasting, and other basic concepts. My favorite class was the final one, taught by Nick Testa, which tied everything together. Nick went on to become President of APICS International in 2006 and the chair of the APICS international conference8.

Within two years, I was APICS-certified at the Fellow Level, a real accomplishment for someone who only two years prior didn’t know what MRP or WIP stood for.

Lesson Learned: Build Your Industry Experience. If you are going to be an expert in business applications, you need to build your industry credentials. For IT infrastructure, this is not as critical a requirement. But when it comes to applications, most employers favor those with industry specialization. This is even more critical for consultants. If you have experience in ERP systems for manufacturing companies, for example, that doesn’t translate to ERP in charitable organizations or hospitals. In the course of my career, I gained experience and credentials in several manufacturing sub-sectors, such as medical devices, pharmaceuticals, food manufacturing, and high-tech, among others. This doesn’t mean you can’t specialize in multiple industries, but don’t try to be a jack-of-all-trades.

Moving to the Front End of the Software Development Life-Cycle

SDM-70 Methodology Schematics
As noted in my earlier post, I was formally trained in software engineering during my time at TRW, mostly around system design, development, and testing. Soon after I arrived at Smith, the department standardized on an SDLC methodology called SDM/70. It consisted of about four feet of three-ring binders, with forms and instructions for each phase and step of the development process, from initial feasibility studies all the way to go-live and ongoing maintenance. Rodger made me the department coordinator.

I understood the system design and development phases, but what really interested me was the earlier stages, like system requirements and especially business requirements. I wanted to get involved earlier and earlier in new projects, even to the point of helping decide whether a new system was even feasible, or whether there was a business case for it.

Over the next five years, I led a number of interesting projects, with several important lessons learned—both positive and negative. But the most interesting project of all was the Bit Record Database, where I led development of a new system to track the current and historic downhole performance of drill bits in the field. This system would become a key focus of my career direction over the next eight years. And, as I recently discovered, it was a strategic initiative for the company at the highest levels.

Lesson Learned: Pick a Focus. When it comes to enterprise IT, figure out where your interests really lie. No one can specialize in every aspect. Some of my coworkers went deep into coding, others enjoyed project management, others pursued a management path, while others, like me, wanted to get close to the business. This was another indication of where my career would be headed in the coming years, which included my leaving Smith’s IT department altogether and moving into the business itself. More on that in a future post.

How I Almost Missed This Career Opportunity

My years at Smith Tool were my greatest period of professional development at this point in my career. But I came very close to missing out. Here’s the story. My first interview was with a low-level HR representative, whom I had arranged to see during my lunch hour. I was very eager to get past her and on to the hiring manager. Unfortunately, she kept me waiting for nearly an hour. When I finally got in to see her, my annoyance was visible, and she decided not to pass me on for the next interview. After two or three weeks, my recruiter went back and somehow convinced the HR group to interview me again. This time, I behaved myself and got passed on to Rodger. I got the job. But, without this second chance, my career could have been much different.

Lesson Learned: Respect People at Every Level. Everyone deserves tolerance and respect, from the front desk receptionist, to the warehouse worker, to the CEO. Moreover, you never know who has influence, and who can make or break your career. And, as my wife, Dorothy, points out, this lesson applies in all areas of life, not just the workplace.

But now I have discovered there is another angle to this story. After reviewing the first draft of this post, Rodger gave me some new perspective from the other side of the desk, so to speak. He writes that his memory around how I was hired (or nearly not hired) is somewhat different than mine. He writes:

As was often the case, here HR used a simple checklist of buzzwords they didn’t understand to screen candidates, rather than understanding the basic job requirements….HR didn’t want to and/or just didn’t know how to screen for (1) brains, (2) energy level with track record, and (3) integrity—what I was specifically demanding. HR would provide me two stacks of resumes each week. Your resume was in the “wrong” stack because of the buzzword score. But HR had not provided enough candidates, so we decided to bring you back in anyway, regardless of the scoring. You then answered my questions and clearly demonstrated you met my three fundamental requirements. I was pretty sure that we could provide the on-the-job training needed to close the application experience gap. That said, in all candor, I never expected that you would take the initiative that you did to close the gap to the extent that you did, as rapidly as you did.

Lesson Learned: When Hiring for Key Positions, Understand What Really Matters. A job posting today can bring in hundreds if not thousands of job applications. Modern recruiting systems, in response, can sort through them and prioritize them into electronic piles, like Smith’s HR group did with paper 40 years ago. Although we need some way to prioritize applicants, the process has to start by thinking through what really matters in qualifying a candidate—not just picking buzzwords. Fortunately, for me, I had Rodger sitting on the other side of the desk.
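
The kind of buzzword screening Rodger describes takes only a few lines of code to replicate, which is exactly why it persists, and why it misses what matters. A toy sketch (purely illustrative; real applicant tracking systems are more elaborate, but they share the same blind spot):

```python
def buzzword_score(resume_text, buzzwords):
    """Naive screening: count keyword hits, the way Smith's HR
    sorted paper resumes into stacks 40 years ago."""
    text = resume_text.lower()
    return sum(text.count(word.lower()) for word in buzzwords)

buzzwords = ["MRP", "COPICS", "manufacturing"]

# A candidate who parrots the 'right' words outscores one with brains,
# energy, and integrity -- qualities no keyword counter can see.
print(buzzword_score("MRP MRP COPICS", buzzwords))                        # 3
print(buzzword_score("COBOL, assembler, MVS, record of delivery", buzzwords))  # 0
```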

A Bump in the Road

My employment at Smith continued until 1983, when it took a short detour due to the oil crisis and a massive layoff. I had been promoted into first-level IT management by that time. Although I was not part of the layoff, the head count reductions meant I needed to be demoted back to a systems analyst and project manager role. Even worse, that Bit Record Database project I was leading was put on hold due to the downturn. This was too much for me, so I reluctantly took a job offer from Gemco, a now-defunct membership department store, to go back to my retail industry roots. But I was only there for a few months when I got the call from Smith to come back to restart the Bit Record project. I agreed, but only if I could do so as an independent consultant. This launched my consulting career. More on this in a future post.

Update: For the next lesson learned from my Smith Tool days, see: What I Learned from a Waterfall Project Failure.  


1Smith Tool had an interesting history. The firm was founded in Southern California in 1902, a time when California rivaled Texas and Pennsylvania in the oil industry. (You can still see oil field pumps in parts of Orange and Los Angeles counties.) Smith started with fish tail bits but eventually began manufacturing three-cone rock bits, which had been invented by Howard Hughes, who founded Hughes Tool. In 1975, Smith tried to get two Hughes patents invalidated but lost the case in 1986 and was ordered to pay over $200M for patent infringement. Using my bit record database, I worked behind the scenes with Smith attorneys during that litigation, and my analysis was submitted in court. I may share more about this in a future post. Smith Tool, along with the rest of Smith International, was acquired by Schlumberger in 2010, and is now known as Smith Bits.

2In addition to the career opportunities, Smith Tool was only about five miles from our home in Irvine. With new baby Steve on the way and only one car, I bought a 10-speed and was able to bike to work several days a week, which did wonders for my cardio-fitness.

3When I arrived, the IT environment at Smith Tool was two IBM 3033 mainframes running IBM’s Disk Operating System (DOS). Smith was reportedly IBM’s largest customer running this lower-end mainframe OS. The only commercial software in the portfolio was MSA for payroll and HR. Everything else was custom batch systems, which had been developed by Arthur Andersen (now Accenture) several years earlier. Most of them had a similar design. I also began developing online (CICS) applications around this time.

4In writing this post, it was hard for me to understand what Rodger saw in me. It must have been that the demand for manufacturing systems developers far exceeded the supply. They had no choice but to train them. I think it must have also been my academic credentials, along with my experience with both COBOL and assembly language (which would soon be needed). They were also interested in my experience at Macy’s with IBM’s DOS operating system and my experience at TRW with IBM’s high-end MVS operating system. In reviewing an early version of this post, Rodger confirms that all of these factors came into play and that my DOS/MVS experience was particularly attractive. Eventually, Rodger assigned me as the co-project manager for the company’s DOS to MVS migration. My co-PM was a young systems programmer named Wayne Meriwether. Since then, Wayne and I have had many business relationships: He has been my client, we have been business partners at Strativa, and after he left Strativa, he came back as a subcontractor. He remains a friend and close associate.

5I am having difficulty tracking the history of COPICS. Some sources indicate the product was announced by IBM in 1979. But I was working on the MRP module in 1978, near the beginning of the “MRP Revolution.” So, it might be that this field-developed program, written or customized by Roly, was incorporated into COPICS. If so, my bug fix would have become part of the commercial product.

6Even as a young man, Roly White was a remarkable character. When he would show up, we all stopped work to hear what he had to share. He was a short, wiry guy, with the standard-issue IBM white dress shirt—but always with an unbuttoned top button, his tie loose, and a cigarette dangling from his fingers. He became a pillar of the Orange County APICS community, where he served for many years. He passed away in 2016, and there is a scholarship fund in his name.

7The larger milling machines could produce much greater volumes of parts, but they took several days to set up for each job. If you understand the relationship between setup time and lot size, you know what this means for inventory levels. The CNC machines could combine multiple operations in one quick setup, but they only produced one part at a time. Understanding these relationships led in a few years to the lean manufacturing movement, which followed the MRP revolution. Contrary to the belief of some, the two are not conflicting theories.
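
The setup-time/lot-size relationship alluded to above is captured by the classic economic order quantity (EOQ) formula from inventory theory. The numbers below are invented for illustration; this is my gloss, not anything Smith actually ran:

```python
from math import sqrt

def eoq(annual_demand, setup_cost, holding_cost_per_unit):
    """Classic economic order quantity: the lot size that balances
    setup (ordering) cost against inventory holding cost."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

# A large milling machine with days of setup (high setup cost) forces big lots...
big_mill = eoq(annual_demand=10_000, setup_cost=5_000, holding_cost_per_unit=20)

# ...while a quick-setup CNC machine justifies much smaller lots.
cnc = eoq(annual_demand=10_000, setup_cost=50, holding_cost_per_unit=20)

print(round(big_mill))  # 2236 -- large lots, high inventory
print(round(cnc))       # 224  -- small lots, lean inventory
```

Because lot size grows only with the square root of setup cost, slashing setup times (the CNC approach, and later the lean movement) cuts inventory dramatically without hurting machine utilization.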

8Over the years, Nick Testa and I have had many business relationships. I have been his student, and after I got certified we went on to be co-teachers and co-developers of APICS curricula. Later we became co-workers running two system integration practices, and later he became a subcontractor to me at Strativa. He remains a friend to this day.

Photo Credits

1. Creator: Robert Yarnall Richie. Credit: DeGolyer Library. Via PICRYL. Note: the rock bit here is not a Smith Tool bit but that of a competitor, Reed. They are nearly indistinguishable.  This one is a milled tooth bit, in contrast to the higher-end tungsten-carbide insert bits. 

2. CNC Machine Facility, Wikimedia.

3. SDM2 Method, which is similar to that of SDM/70. Wikipedia.

Thursday, July 29, 2021

Amazon and Workday Part Ways on HCM

I missed the news earlier this week that Amazon and Workday called off the implementation of Workday HCM. Apparently this is only coming to light now, even though the project was abandoned more than 18 months ago. How something this big was not leaked earlier is a mystery. 

Phil Wainewright has a thoughtful post on the subject. He writes: 

Questions remain concerning e-commerce giant Amazon's discontinuing of its wholesale deployment of Workday HCM and Payroll, which came to light this week after a report in Business Insider. Workday subsequently published a blog post confirming that the two companies had "mutually agreed to discontinue" the deployment more than a year and a half ago, which was over three years after Amazon first signed up to the deal in October 2016. The deal was announced in February 2017, shortly after Workday announced retail giant Walmart as a customer, a deployment that has since successfully gone live.

On a positive note, the project is ending without litigation. And, according to Workday's blog post, it will continue its partnership with Amazon's AWS for its cloud infrastructure, as well as its implementations with other Amazon subsidiaries, such as Audible, Twitch, and Whole Foods. 

What Happened?  

The Business Insider report, based on an anonymous source, says "the database behind Workday's software didn't scale as planned to fully support Amazon's rapidly growing workforce."

Workday disputes this. It writes: 

This was not related to the scalability of the Workday system, as we currently support some of the world’s largest organizations, including more than 45% of the Fortune 500 and more than 70% of the top 50 Fortune 500 companies. In addition, more than 70% of our customers are live, including one of our largest customers — a retailer — across its more than 1.5 million global workers.

Workday, rather, attributes the failure to Amazon's unique requirements. It writes, "At times...customers have a unique set of needs that are different from what we’re delivering for our broader customer base, as was the case with Amazon — one of the most unique and dynamic companies in the world."  

How Do I See It? 

All I can do here is read between the lines. 

First, I don't think the Business Insider's claim of a Workday scalability problem is credible. Workday doesn't name its large retail customer, but no doubt it is Walmart. In 2020, Amazon had about 1.3 million employees, while Walmart had about 2.3 million. So, as my late business partner used to say, there is "an existence proof" for Workday being able to scale to support enterprises with multi-million employee counts. 

Then, what about Workday's claim that Amazon had some unique requirements that are different from what Workday provides for the rest of its customers? 

This has a ring of truth to it. Amazon is unique in many ways, and it would not be surprising if this extends to how it hires, retains, and manages its workforce. As a SaaS provider, Workday cannot afford to customize its core architecture and processes to accommodate a single customer, even one as large as Amazon. It is commendable that Workday was willing to walk away from a large opportunity like this rather than compromise its core architecture. 

On a much smaller scale, Plex (a cloud manufacturing ERP provider), in its early years, used to make customer-specific customizations to its core multi-tenant code base. Later, it paid the price to move those customers off those customizations and back to its common core. To my knowledge, it is still trying to do so. Workday is not about to make that mistake. (Interestingly, Plex itself is a Workday client and partner.)

What Happens Next? 

Workday writes that it and Amazon may revisit the HCM deployment in the future. But for now the project has been discontinued. This leaves Amazon on its legacy Oracle PeopleSoft HCM system.  

This is where the plot thickens. There is no love lost between Amazon and Oracle. With its Redshift offering, Amazon looks to shift Oracle customers away to Amazon's data warehouse. Oracle, in turn, looks to compete with Amazon with its own cloud infrastructure offering. Naturally, Amazon has been working to disentangle itself from any use of Oracle products in its internal operations. Having to remain on PeopleSoft has to stick in the craw of Jeff Bezos. This might explain why the project may be revisited in the future. 

There are not many HCM options for enterprises the size of Amazon. PeopleSoft is a legacy platform, with Oracle's HCM Cloud as its successor. But Amazon is not likely to increase its dependence on Oracle. Workday is the obvious alternative, which is why, despite the project failure, it still might be "revisited."

So, is SAP an option?