Tuesday, December 26, 2023

Three Decades of Software Vendor Selection

Traffic sign: Fork in the road
This post continues my series on lessons learned in my career. Chronologically, we left off with my work on a major system development project at Smith Tool. By the time I finished, I had been an independent consultant for about six years. But during that time, I also began to branch out to other clients, travelling around the US and beyond for mainframe conversion training, 4GL software development, and other consulting engagements.

All these clients had one thing in common. They all came directly or indirectly by referrals from former coworkers at Smith. As it turns out, I was lucky. When a sole proprietor like me is busy with project work, it is difficult to develop new opportunities. And when a project ends, it is not easy to line up new work on short notice. Those independent consultants I know who have been successful over many years generally have reputations as experts. They also tend to have multiple revenue streams in addition to their consulting work, such as paid speaking engagements, books, or paid media contributions. Balancing project work with business development is easier for a consulting firm with some scale, something I would learn later in my career.

But in 1989, there was still one more referral for me. A former coworker from Smith Tool, Rick McGough, had just been hired as the top IT executive for Toshiba America Medical Systems (TAMS) [1]. TAMS was the U.S. distribution and field services unit for Toshiba’s medical diagnostic imaging device business [2]. Rick was looking to replace a homegrown business management system with a commercial ERP system. TAMS already had two Deloitte consultants on the project [3], but Rick wanted “his own guy” on the project team as well. So, he brought me in. This would be my first real exposure to a full-blown ERP selection project, and it became the basis for my continuing this type of work over the following three decades, evaluating and selecting not only ERP but also CRM, HCM, supply chain, and other categories of enterprise systems.

Lesson #1: Focus on Differentiators

The Deloitte consultants had recycled a requirements document of several hundred pages from a previous client and attempted to modify it for use at Toshiba. This turned out not to be a great approach, as the document included a number of requirements that didn’t apply to TAMS while missing several important ones.

One key requirement was serial number tracking, which is an FDA regulatory mandate. Each unit of equipment, and even critical components within a finished assembly, needs to be tracked with a unique identifier. So, if Toshiba receives 10 X-ray tubes from Japan, it’s not enough just to record a receipt of 10 units. It must record the serial numbers of those 10 units and track them in inventory and through distribution to the installed customer base. But guess what. The system that the team ultimately chose (BPCS from System Software Associates, or SSA) did not have serial number tracking. It did have lot number tracking, and the reseller’s proposed solution was to use the lot number as the serial number and modify the source code to force a lot quantity of one. They also had to modify the inventory receipt program to allow the data entry user to enter the individual serial numbers at time of receipt. As you can imagine, there were other modifications needed for inventory management, sales, and distribution to accommodate the “lot number as serial number” modification.
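To make the data-model gap concrete, here is a minimal sketch (in Python, purely for illustration; BPCS itself ran on IBM midrange systems, and all names here are hypothetical) of what the "lot quantity of one" workaround implies: every receipt explodes into one record per physical unit.

```python
from dataclasses import dataclass

# A lot-tracked system records one row per lot, with a quantity.
@dataclass
class LotRecord:
    item: str
    lot_number: str  # the workaround stores the serial number here
    quantity: int    # forced to 1 by the source-code modification

def receive_with_serials(item: str, serials: list[str]) -> list[LotRecord]:
    """The modified receipt program: one single-unit 'lot' per serial number."""
    return [LotRecord(item=item, lot_number=sn, quantity=1) for sn in serials]

# Receiving 10 X-ray tubes now creates 10 records instead of 1, and every
# downstream transaction (issue, ship, return) must be modified to handle
# these single-unit lots.
records = receive_with_serials("XRAY-TUBE", [f"SN{1000 + i}" for i in range(10)])
print(len(records))  # 10
```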

To make a long story short, this was an example of a requirement that was so important that it normally should have disqualified that vendor from consideration [4]. I now call these types of requirements “differentiators.” They are more than priority A. They are show-stoppers. As a result, BPCS did not last long at Toshiba. It was replaced a few years later by Oracle Applications (today, E-Business Suite), which could satisfy the serial number tracking requirement as well as other differentiators.

Based on this lesson learned, I made a practice of identifying five to 10 differentiators for each client and using them to qualify vendors for the initial short list. In developing a short list, you really don’t need an RFP with hundreds or thousands of requirements, but you do need to know these key criteria. You can specify additional requirements (though I recommend avoiding hundreds) later in the selection process. Over the years, I had other experiences where the client made a vendor selection decision without following this best practice [5].
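To illustrate the practice, here is a small sketch (Python; the vendors and capability lists are made up) of how differentiators work as pass/fail gates rather than weighted scores: a vendor that misses any one of them is out, no matter how well it does elsewhere.

```python
# Show-stopper requirements: every one must be satisfied to make the short list.
differentiators = {"serial_number_tracking", "multi_level_planning_bom"}

# Hypothetical vendor capability sets.
vendors = {
    "Vendor A": {"serial_number_tracking", "multi_level_planning_bom", "lot_tracking"},
    "Vendor B": {"multi_level_planning_bom", "lot_tracking"},  # misses a show-stopper
}

short_list = [name for name, capabilities in vendors.items()
              if differentiators <= capabilities]  # subset test: ALL must be present
print(short_list)  # ['Vendor A']
```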

Lesson #2: If You Must Modify, Do It with Bolt-on Modifications

Another concept I developed during this project was the difference between good modifications and bad modifications. I recall standing at a whiteboard, discussing this with the Deloitte consultants. I told them we should avoid customizing vendor source code. First, it may disrupt the logical integrity of the system—it may break the system in unanticipated ways. Second, it makes it difficult to upgrade to new releases of the software, because all those customizations will need to be carried forward to the new version.

Instead of customizing the vendor’s source code, I said that we should develop what I called “bolt-on modifications.” I drew a diagram on the whiteboard, showing the modifications as a separate program or subsystem external to the vendor’s code, communicating with it by means of an API. This approach is common today in what is known as a service-oriented, modular, or composable architecture [6].
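Here is a minimal sketch of the bolt-on idea, in Python for illustration (the class and method names are hypothetical). The point is structural: the extension lives in its own module, keeps its own data, and touches the vendor system only through a published interface, so a vendor upgrade replaces the core code without disturbing the bolt-on.

```python
class VendorERP:
    """Stand-in for the vendor's packaged system; we treat it as a black box."""
    def post_receipt(self, item: str, qty: int) -> int:
        print(f"vendor: received {qty} x {item}")
        return 42  # receipt ID assigned by the vendor system

class SerialCaptureBoltOn:
    """Our code, in our own module, with its own extension table."""
    def __init__(self, erp: VendorERP):
        self.erp = erp
        self.serials_by_receipt: dict[int, list[str]] = {}

    def receive(self, item: str, serials: list[str]) -> None:
        receipt_id = self.erp.post_receipt(item, len(serials))  # API call only
        self.serials_by_receipt[receipt_id] = serials  # vendor source untouched

bolt_on = SerialCaptureBoltOn(VendorERP())
bolt_on.receive("XRAY-TUBE", ["SN1001", "SN1002"])
```

Contrast this with the serial number workaround above, which reached inside the vendor's receipt program and therefore had to be re-applied at every upgrade.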

Sadly, the modification to the lot number logic I mentioned in the previous lesson could not be accomplished with a bolt-on modification. It required modification of the core source code. As a result, shortly after the initial implementation, it took the project team six months to upgrade to the next version.

Lesson #3: ERP Can Be Hard to Justify with Tangible Benefits

An ERP implementation is a major expenditure, and as such it would need management approval in the form of a capital expenditure request, which must budget both the costs and the anticipated benefits. The cost side of the equation is usually easy—hardware, software, and implementation costs can all be estimated based on vendor proposals.

The benefits side, however, is more difficult, especially if you want to identify tangible benefits. The project team searched for improvements in inventory accuracy, order fill rates, customer service, and other areas. Time and again, we would identify a possible improvement but would conclude, “it doesn’t move the needle” [7]. Later in my career, our research at Computer Economics confirmed this finding [8].

When there are tangible benefits, they are mostly in two areas:
  • Productivity savings. If the organization required 15 accounts payable personnel and the new system could cut that number to 10, that would be a productivity improvement. But few organizations really want to bank on that—they hate to lay off people. They would rather redeploy them into other roles. Or, they would rather say that the new system could allow future growth without adding to administrative headcount.
  • Discontinuation of legacy systems. If the new system is replacing an old system—and preferably several old systems—there will be savings in the hardware, software, and personnel that supported the old system. If the old system was highly modified, the new system might require fewer personnel, although this is often not the case, especially in the early years.
Therefore, in selection projects over the years, I’ve concluded that ERP benefits are mostly intangible—difficult to put a dollar value on. If done right, they give management better data to support decision-making, and they provide a single source of truth. Most importantly, they provide a better platform for future systems that rely on ERP data, such as business intelligence. There are also other more strategic drivers: the old system might no longer be supported by the vendor, or it might be on a legacy hardware platform. I’ve seen more situations where this was the driving force for a system replacement than I have with a business case based on tangible benefits [9].
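To see why tangible benefits so often fail to move the needle, here is a back-of-the-envelope calculation using the two categories above. All of the numbers are hypothetical assumptions, not figures from any actual project.

```python
# Productivity savings: AP staff of 15 cut to 10 (assumed fully loaded cost).
headcount_saved = 15 - 10
cost_per_employee = 80_000         # assumed annual fully loaded cost per employee

# Discontinuation of legacy systems (assumed annual hw/sw/support run cost).
legacy_run_cost = 250_000

annual_benefit = headcount_saved * cost_per_employee + legacy_run_cost
implementation_cost = 5_000_000    # assumed total project cost

print(f"Annual tangible benefit: ${annual_benefit:,}")                      # $650,000
print(f"Simple payback: {implementation_cost / annual_benefit:.1f} years")  # ~7.7 years
```

A payback of seven-plus years rarely survives a capital review on its own, which is why the intangible and strategic drivers usually carry the business case.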

Lesson #4: Software Selection Is More Than Selecting Software

But the greatest lesson learned from this first selection project and over the following three decades is that software selection projects are really misnamed. They are not just about selecting software—they are about business transformation. As such, there are some key success factors:
  • Business sponsorship. Just because computers are involved does not make these projects computer projects. They should be driven by the business, with the IT organization as part of the project team. Top management should actively sponsor the project and not delegate it to middle management.
  • Strategic alignment. Many ERP selections arise from organizations changing their business model, acquiring new lines of business, or expanding into new markets. To provide context, ERP selection should start with a review of the current and future business strategy and how IT is aligned to support it.
  • Application portfolio rationalization. Many clients have dozens of enterprise systems, and many of them are interconnected. When replacing a major system, should those other systems stay or go? Most organizations would do well to first address the entire portfolio of applications and lay out a strategic road map to understand which should be replaced or consolidated.
  • IT organizational impact. Does the current IT organization have the needed skills to support the future applications road map, including the new system? If not, what new skills are needed and how should the IT staff be organized?
  • System integrator selection. Selecting the new software is not the only decision. Most organizations will need to select an ERP implementation partner. Having the right ERP system but the wrong implementation partner often leads to failure. In fact, some of my projects over the next 30+ years involved finding a new implementation partner to replace the one that failed.
  • Client-side planning. Generally, the system integrator will have a good handle on what it needs to do. But the client’s project team must also plan for the activities required on the client side, such as training, data conversion, procedure writing, and acceptance testing. The system integrator will generally expect the client to be responsible for these activities.
  • Business process improvement. New systems can involve major changes in business processes—hopefully for the better. The selection team should also anticipate what business process improvements will be needed as part of the new system and include them in the project plan.
  • Change management. Enterprise software implementations rarely fail because of technology problems (though it does happen). They often fail because of organizational resistance to change. Engaging the organization during new system selection is how you gain buy-in for the decision, setting the stage for cooperation during the implementation.

Further Industry Specialization

I look back at my experience with Toshiba as a major steppingstone in my career. As was my habit, once again, I immersed myself in the business—in this case medical devices. I interviewed business leaders and studied the various imaging system modalities, such as X-ray, MRI, CT, and ultrasound. Short term, this enabled me to continue consulting for Toshiba on the business side after the implementation was finished. I moved into the customer service group, defining requirements for future field service systems. Longer term, my experience at Toshiba became a foundation for my later industry focus on medical devices and life sciences in general, including FDA regulatory compliance.

Photo credit: Frank Scavo.   

End Notes

[1] Rick had been hired at Smith Tool as a programmer analyst in 1978, about the same time I was. But whereas I took a career path to get deeper into the business, eventually leaving the IT department, Rick chose an IT management path. He rose through the ranks, eventually becoming the IT director. As noted in an earlier post, the 1980s were a tumultuous time for Smith, and in 1989 Rick left to take the top IT job at Toshiba. He is now retired after holding several senior IT positions at other firms.

[2] Toshiba Medical was acquired by Canon in 2016 and is now Canon Medical Systems Corp. The US headquarters is still located just a few miles from where we lived at the time.

[3] One of those two Deloitte consultants was John Olszewski. John became a good friend, and we worked together on and off for something like six to eight years for other clients. He continued with Toshiba long after my work there was completed. He eventually played a leading role in replacing BPCS with Oracle Applications. With that experience, he then went to work for Oracle, where he is now Senior Director E-Business Suite Service Product Management.

[4] There was one key requirement that tipped the scales in favor of BPCS and that was its ability to specify sales features and options in a multi-level planning bill of material. None of the other systems on our short list, which Toshiba had restricted to IBM midrange systems, could satisfy that requirement.

[5] Several years later, I had another client where the chosen vendor did not satisfy a show-stopper. Coincidentally, John Olszewski and I worked together on this project. We came in after the vendor selection to help with the implementation. Here the client was a well-known multi-level marketing (MLM) company. In this industry, customers (i.e., independent distributors) would deposit cash into their accounts against which they would place orders for their customers. The MLM company would then fulfill those orders. In other words, the independent distributors had to pay in advance. But the chosen system was a typical B2B system. It assumed the independent distributor would place an order, and then the MLM company would fulfill it, invoice the distributor, collect the payment, and apply it to the invoice. As a result, we had to modify the system to accommodate this fundamental process misalignment. In retrospect, this system should have been dropped from consideration just based on this one showstopper.

[6] The term “bolt-on modification” is in common use today, but I am convinced that I invented the term. I have searched and have not been able to find a reference to the use of this metaphor prior to 1989. Yes, it was used in the automotive industry, but I can’t find anyone using it relative to software engineering. If anyone can find it, I will be happy to give up my claim. But until then, I am claiming authorship.

[7] As in the previous note, I’m convinced that I coined the phrase, “it doesn’t move the needle.” In trying to come up with the business case, I used it as shorthand for, “Yes, that may be a benefit, but it is too small to really make a difference.” I had in mind an automobile fuel gauge, where adding a small amount of fuel would not be enough to make the needle move. Again, I have not been able to find use of this term as a metaphor prior to 1989. But I’m willing to see evidence otherwise.

[8] Our research at Computer Economics (now part of Avasant) over the years compared the ROI and TCO experience of a number of IT investments. We found consistently that ERP ranked dead last in economic success. We argue that this does not mean organizations should not invest in ERP, but rather that the benefits can be difficult to quantify and the costs need to be carefully controlled to stay within budget.

[9] What about growth in revenue? In my experience, it is difficult to claim top-line benefits from ERP systems, which are mainly focused on back-office processes. If a new ERP system is needed to support a new line of business, there would be top-line benefits, of course, but the contribution of the ERP system to that outcome is indirect. CRM systems, on the other hand, might be justified in terms of improved customer service, improved upsell/cross-sell performance, and other measures that increase revenue.

Thursday, November 02, 2023

Generative AI and the New Data-Driven Productivity Paradigm

Earlier this week, I led an Avasant panel discussion on Generative AI and the New Data-Driven Productivity Paradigm. You can watch the entire video here. 

I started with a brief introduction to set the stage, comparing today's generative AI (GenAI) services with earlier forms of AI, which date back several decades. The panel then discussed a number of important elements of generative AI: 

  • Why GenAI has gotten so much attention over the past year. 
  • Where we see GenAI delivering productivity gains as well as top line revenue growth. 
  • Data quality as a prerequisite for realizing GenAI benefits as well as issues around confidentiality and data privacy. 
  • The regulatory landscape around GenAI, both today under GDPR and in the future.
  • The risks for enterprises to consider in implementing GenAI systems.

We ended with a lightning round about practical steps for organizations to take in getting started with GenAI. 

What is Generative AI?

Tuesday, August 29, 2023

Opportunities and Challenges with Generative AI

Although artificial intelligence originated in academic research in the 1950s, only recently has it captured the imagination of the general public. This has everything to do with the release of ChatGPT, which put a powerful generative AI tool in the hands of individual consumers. But what are the opportunities it brings to businesses? And what are the challenges we face in using it?

I blogged about this back in February, not long after ChatGPT was released, in my post, ChatGPT for Industry Research: Not Ready for Prime Time. This was based on my early testing of the technology. Since that time, use cases by industry have started to surface, and there are many promising opportunities, just a few of which we discuss in my interview. But the risks and concerns still remain. How can we realize the opportunities, while minimizing the risks?

Read a summary of the interview on the Avasant website with the link to the full video.

Frank Scavo video interview on generative AI

Monday, August 21, 2023

A Teams Model for Effective Innovation

This post continues my series on lessons learned in my career, the ideas that influenced me, and the people who helped me along the way. This post is on the role of teams in developing and implementing innovation.

Most of my career as a consultant over the past 40+ years has involved innovation in one way or another. That’s what originally drew me into consulting. But innovation is rarely the domain of individual contributors. Innovation is a team sport. The most interesting and exciting times in my career have been when I could participate in a team focused on some sort of innovation. These experiences included developing new systems, building several consulting practices, developing new research publications, and participating as a consultant on clients’ teams.

So, it is critical to understand how team members can work together most effectively to bring an idea to reality. And this includes understanding the roles that each innovation team requires and the stages that the team passes through. 

Two Conceptual Models for Working Together

For most of the 1990s, I was a consultant for a systems integration firm in Orange County, California (no longer in business). During that time, I first managed two groups of ERP implementation consultants and then launched a management consulting practice within the firm. I also developed most of the firm’s internal training and consulting methodologies. Because the owners of the firm knew how important teaming was to our success, they brought in an outside consultant, Dr. Karol A. Bailey, to train us in two behavioral profiling tools. 

  • The first was DiSC, an assessment tool to help individuals better understand themselves and others, along with their preferred work styles. Originating in research from the 1920s, DiSC has gone through multiple iterations and refinements over the years and is still in widespread use today. It is now owned by John Wiley & Sons and is available through its authorized resellers. It is a powerful tool, and I still apply it today in my personal interactions and collaboration with others. My long-time associate Dee Long became a certified DiSC trainer and has been a great help to me in continuing to apply it over the past three decades. 
  • The second was what we knew, at the time, as the C.A.R.E. profile [1]. Although this model is synergistic with DiSC, it was developed independently. It specifically focuses on the roles that are needed in any team and the stages that an innovation should go through to successful implementation. 

The C.A.R.E. model is illustrated in the schematic below, which I’ve drawn from memory and earlier training material. It recognizes that a successful team moves an idea through four distinct phases, in sequence, forming a Z-pattern.

CARE model of teams

  1. Creators. These are the idea people, who dream of new possibilities. They often start sentences with, “Wouldn’t it be great if ___________”. 
  2. Advancers. These are those who take the idea and run with it, communicating and promoting it inside and outside the team. Through interactions with others, they test the idea to see if there is—or could be—a market for it. 
  3. Refiners. These are those who analyze the idea to find issues or problems that stand in the way of success and develop solutions to resolve them.
  4. Executors. These are the team members who oversee the implementation and, if appropriate, support it on an ongoing basis. 

The two roles on the top—Creator and Advancer—are focused on possibilities, what could be. They have their heads in the clouds. The roles on the bottom—Refiner and Executor—are focused on realities, what is practical. They have their feet firmly planted on the ground. The two roles on the left—Creator and Refiner—are focused on analysis. They like to work with abstract ideas. The two roles on the right—Advancer and Executor—are focused on relationships. They like to work with people.

The C.A.R.E. model also recognizes a fifth profile, the Flexer. This is the least common profile. These are individuals who by nature can serve in any of the other four roles. They are like utility players in baseball, able to play any position. They are also good at facilitating the process of moving the innovation from one stage to the next in the Z-process. You don’t need to have a Flexer on your team, but if you have one, it can be quite valuable.

Moving Through the Four Stages

It is important to realize that to ensure success, any innovation must pass through these four stages, and skipping over a stage will lead to failure. For example: 

  1. Jumping straight from creation to execution. Some creators are so excited about the idea that they want to implement it immediately. “Let’s just do it!” they exclaim. Organizations with this culture tend to launch many new initiatives, most of which wither like flowers without water. 
  2. Skipping the Advancer stage. This sometimes happens when the Refiners look at the new idea and immediately see problems with it. They look at the Advancers as cheerleaders, not realistic about what it will take to make the idea work. They don’t realize that someone first needs to communicate and promote the idea, to see if there really is a market for it. Without Advancers, the idea suffers “paralysis by analysis.” Refiners by nature wear what Edward de Bono called the black hat (seeing the negative). First the idea needs some promotion, for team members to put on what de Bono called the yellow hat (seeing the positive).
  3. Skipping the Refiner stage. This happens when the Advancer stage shows the idea has legs and good possibilities. The team gets excited and wants to move straight into execution. But without analyzing the idea and resolving any issues, the innovation will likely fail in execution. Few ideas are perfect in their initial conception. Some refinement is almost always needed. It is like the testing phase in software development. No system can go straight from development to production. It is important to see Refiners not as naysayers but as having an important role to play in perfecting the innovation so the idea will succeed.
  4. Not following through to execution. This happens when the team does not have many hands-on doers. It is an even greater problem when the idea is a product or service that needs ongoing support and management. Organizations like consulting firms that are mostly project-based businesses can have this problem. They are good at managing projects that have a beginning and an ending with a defined set of deliverables. But they may not have many people with skills and process-orientation to manage something day in and day out. 

Even if a person is not assessed as a Flexer, he or she may be comfortable in more than one role. Many team members will have a preferred role while also gravitating toward a second role. One common combination is Creator/Advancer—those who come up with new ideas and are also good at promoting them to others.  Another is Creator/Refiner—those who are good at conceiving new ideas and also good at perfecting them. Another is Refiner/Executor—those who refine the idea and then implement and manage it going forward. 

My Preferred Role

So, how did I test out? I am a Creator/Refiner. I love coming up with new ideas, and I am also good at analyzing them and refining them to make them better. At the same time, I may not be the best person to advance an idea. In fact, the Refiner-side of my profile means I tend to get nervous when the team rushes to promote an idea (especially if it wasn’t my idea!). As an analytical person, I tend to see the problems, the defects. So, I need to remind myself that ideas need to be promoted before they can be refined. There is a time for promoting and a time for refining. 

I am also not a natural Executor. Of course, having owned two businesses for twenty years, I could not avoid ongoing operations. But I have always done my best when I had team members who were good at execution, with an attention to detail, so I could do what I do best. Fortunately, I was blessed over many years to have had a few business associates who were excellent as Executors [2][3].

Although the original C.A.R.E. assessment is no longer in commercial distribution, it is not difficult for individuals to figure out what roles they prefer to play. The important thing is for the entire team to understand the four roles and to move an innovation idea through these four stages. This will lead to greater appreciation for others and their unique contributions to team success.

End Notes

[1] The C.A.R.E. assessment was later rebranded as Team Dimensions, which, like DiSC, is also owned by Wiley. Although Wiley no longer markets it, it may be available in different forms through other providers.

[2] One was Barbara Newton, whom I’ve known for 30 years. She worked with me and Dee Long at that systems integration firm I mentioned earlier. She then joined my partner and me when we launched the consulting firm Strativa in 2000 and later acquired Computer Economics in 2005. She was responsible for all of the research publication processes as well as client services. She stayed on through our acquisition by Avasant in 2020 and retired in 2021.

[3] Another example is Sherry Maples, who joined us in 2001 and stayed on for nearly 20 years. She ran the accounting function and, with Barbara, managed the back-office processes for the two companies, freeing me to focus on consulting and research. She retired in 2019. She was incredibly detail-oriented, which is exactly what I needed in those years.

Tuesday, February 14, 2023

ChatGPT for Industry Research: Not Ready for Prime Time

ChatGPT: Four Errors in a Single Sentence
Since its unveiling in November 2022, there has been a lot of buzz around ChatGPT—OpenAI’s generative AI chatbot. Although there are similar projects, such as Google’s Bard, ChatGPT got an enormous boost as it was free, easy to use, and could be deployed in a variety of ways—from answering questions to writing essays, poems, and song lyrics, to coming up with recipes, to writing software code, based on simple prompts from the user. 

One inevitable question of course was, to what extent would ChatGPT threaten jobs?

For me, it’s an important question. I direct a team of IT and industry research analysts at Avasant. Is ChatGPT a disruptive threat to our business? Would competitors with less-skilled workers be able to generate research as good as ours, or in greater quantities? Would clients even need research firms like ours at all, if they could generate their own reports using these tools? Or can we leverage a tool like ChatGPT to make us more productive?

Putting ChatGPT to the Test

To find out, I did a series of tests to see if ChatGPT could write a short research note. 

One of my tests was to ask ChatGPT to write about something I am intimately familiar with: The history of the IT research firm Computer Economics, which my partner and I acquired in 2005 and I sold to Avasant in 2020. (I still manage this business as part of Avasant Research.) 

Here is the transcript of my prompts and the responses from ChatGPT, lightly edited for length, along with my evaluation of ChatGPT. 

Prompt: “Provide a history of the IT research firm Computer Economics.”

ChatGPT responded: 

Computer Economics is a research and consulting firm specializing in the field of information technology (IT). The company was founded in 1982 in Newport Beach, California, by John L_________, who currently serves as the firm's President and CEO. 

There are four factual errors in the second sentence of this opening paragraph! Computer Economics was founded in 1979, not 1982, as documented in this Wikipedia entry. (Wikipedia is a primary source for ChatGPT's training data, so how did it miss this?) Moreover, the firm was founded in New York City, not Newport Beach, by Bruno Bassi, not John L., and John L. was never the company president. He was our VP of Research, from 2007 until he retired in 2017. ChatGPT just made this all up. 

But wait, there's more! A couple of paragraphs later, ChatGPT writes: 

Computer Economics … has received several awards for its work, including recognition from Forrester Research as a “Leading IT Research and Advisory Firm” and from Gartner as a “Top IT Research and Consulting Firm.” 

But neither Forrester nor Gartner has any such award program. In fact, it would be hilarious if they did since they are both IT research and consulting/advisory firms themselves! ChatGPT just made up that part. So, I gave ChatGPT another prompt and a chance to correct itself. 

Prompt: “Please provide more information about Gartner's award program for Top IT Research and Consulting firms. When was this program established and who were some of the most recent winners of this award?” 

ChatGPT responded (click to enlarge): 

ChatGPT Hallucinations

Apparently, ChatGPT is not aware of the First Law of Holes: When you find yourself in one, stop digging. 

My prompt asked who some recent award winners were. Now it says the winners are not publicly available. What kind of award keeps the winners secret? Moreover, if the winners are secret, how does it know Computer Economics was one of them? At the same time, the winners must not be secret, because they “can be found in Gartner’s annual report on the market for IT research and consulting services” (which, of course, does not exist).

Risks in the Use of ChatGPT for Research

In summary, here are some observations on the risks of using ChatGPT as a virtual research analyst.  

  1. Fiction parading as fact. As shown above, ChatGPT is prone to simply make up stuff. When it does, it declares it with confidence—what some have called hallucinations. Whatever savings a research firm might gain in analyst productivity, it might lose in fact-checking, since you can’t trust anything it says. If ChatGPT says the sun rises in the east, you might want to go outside tomorrow morning to double-check it.
  2. Lack of citations. Fiction parading as fact might not be so bad if ChatGPT would cite its sources, but it refuses to say where it got its information, even when asked to do so. In AI terms, it violates the four principles of explainability.
  3. Risk of plagiarism. Lack of citations means you can never be sure if ChatGPT is committing plagiarism. It never uses direct quotes, so it most likely is paraphrasing from one or multiple sources. But this can be difficult to spot. More concerning, it might be copying an original idea or insight from some other author, opening the door to the misappropriation of copyrighted material. 

Possible Limited Uses for ChatGPT

We are still in the early days of generative AI, and it will no doubt get better in the coming years. So, perhaps there may be some limited uses for ChatGPT in writing research. Here are two ideas. 

The first use might be simply to help overcome writer’s block. We all know what it’s like to start with a blank sheet of paper. ChatGPT might be able to offer a starting point for a blog post or research note, especially for the introduction, which the analyst could then refine. 

An additional use might be to have ChatGPT help come up with a structure for a research note. To test this, I thought about writing a blog post on the recent layoffs in the tech industry. I had some ideas on what to write but wanted to see if ChatGPT could come up with a coherent structure. So, I gave it a list of tech companies that had recently announced layoffs. Then I gave it some additional prompts:

  • What do these companies have in common? Or are the reasons for the layoffs different for some of them? 
  • As a counterpoint, include some examples of tech companies that are hiring.
  • Talk about how these layoffs go against the concept of a company being a family. Families do not lay off family members when times are tight. 
  • Point out that many employees in the tech industry have never experienced a downturn and this is something that they are not used to dealing with.

The result was not bad. With a little editing, rearranging, and rewriting, it could make a passable piece of news analysis. As noted earlier, however, the results would need to be carefully fact-checked, and citations might need to be added.

One word of warning, however: In order to learn, young writers need to struggle a little, whether it is by having to stare at a blank sheet of paper or constructing a narrative. I am concerned that the overuse of tools like ChatGPT could deny junior analysts the experience they need to learn to write and think for themselves. 

The larger lesson here is that you can’t just ask ChatGPT to come up with a research note on its own. You must have an idea and a point of view and give ChatGPT something to work with. In other words, treat ChatGPT as a research assistant. You still need to be the analyst, and you need to make the work product your own. 

I will be experimenting more with ChatGPT in the near future. Hopefully, improvements in the tool will mitigate the problems and risks.


Update Feb. 20, 2023: Jon Reed has posted two lengthy comments on this post with good feedback. Check them out below in the comments section. 

Sunday, October 09, 2022

What If You Held a Metaverse Party and Nobody Came?

The metaverse just might be the next big thing, but according to two reports this week, that time is not yet. 

The first story is from CoinDesk, which reports that the two leading decentralized metaverse platforms—Decentraland and The Sandbox—average below 1,000 daily users. Yet each is a unicorn, with over $1 billion in valuation.

What’s going on in the metaverse these days, you might ask. Looking at two of the biggest companies with over $1 billion valuations, the answer is surprising: Not much, or at least not enough to bring users back every day. According to data from DappRadar, the Ethereum-based virtual world Decentraland had 38 active users in the past 24 hours, while competitor The Sandbox boasted 522 active users in that same time.

An active user, according to DappRadar, is defined as a unique wallet address' interaction with the platform’s smart contract.

This matches my own observation a few weeks ago when I created an account on Decentraland. Apart from the clunky graphics, the thing that struck me was, there's no one here! Until I read the CoinDesk report, I thought maybe I was doing it wrong. But apparently not.  

So, maybe the centralized metaverse platforms, such as Meta’s (formerly Facebook) Horizon Worlds, are where the action is. Apparently not. According to this report on The Verge, the user experience on Horizon Worlds is so bad that management under Mark Zuckerberg has to encourage, cajole, and beg even its own metaverse developers to use it.

In a follow-up memo dated September 30th, Shah said that employees still weren’t using Horizon enough, writing that a plan was being made to “hold managers accountable” for having their teams use Horizon at least once a week. “Everyone in this organization should make it their mission to fall in love with Horizon Worlds. You can’t do that without using it. Get in there. Organize times to do it with your colleagues or friends, in both internal builds but also the public build so you can interact with our community.”

On the other hand, we are already seeing real value in some early metaverse business applications. Two weeks ago, I co-moderated a metaverse panel discussion at Innovate@UCLA. One of the panelists, Chris Mattmann, Chief Technology and Innovation Officer at Jet Propulsion Laboratory, described how JPL is already using metaverse-like digital worlds to great success for employee onboarding, virtual tours, and virtual meetings.

Early adopters, like JPL, give an indication of where the value may lie. But for now, as far as public metaverse platforms go, it appears we are close to or at the peak of the hype cycle. 

On the third hand, I’ve been wrong before. As I wrote earlier this year: Predictions are hard, especially about the future.


Image Credit: Decentraland, via CoinDesk. 

Sunday, August 07, 2022

An Innovator’s Story: Creating a Business for Lasting Success

Back in May, I had the opportunity to do an on-stage interview with Jamie Siminoff, founder and CEO of Ring, as part of Avasant’s Empowering Beyond Summit.

Ring, the first provider of video doorbells, is an interesting case study in innovation. Siminoff founded the firm in 2013, and, despite walking away from an episode of Shark Tank with no money, grew it to disrupt the home security industry.

Siminoff eventually sold Ring to Amazon in 2018 for over $1 billion. Now, under Amazon’s ownership, he continues to manage Ring, which has grown to be the largest home security camera brand in the world.

Over on the Avasant website, I put together a summary of Siminoff’s keynote and my on-stage interview around two broad themes:

  • Lessons learned in innovation, based on Ring’s invention. 
  • How to ensure success when an innovative startup is acquired by a much larger enterprise.

The research byte concludes with Siminoff’s view on how business leaders in traditional organizations can apply the lessons in innovation.

Read the research byte on the Avasant website: An Innovator’s Story: Creating a Business for Lasting Success


Sunday, May 15, 2022

Predictions Are Hard, especially about the Future

Gemco Membership Card
With nearly half a century in enterprise IT, I have had plenty of time to see how technology predictions over the years have been fulfilled—or not fulfilled. This was brought home to me recently while reviewing an old project document.

But first, some context. As noted in my previous post, I felt forced by a business downturn in 1983 to resign from Smith Tool and take an IT manager position at Gemco, a now-defunct membership department store, then owned by Lucky Stores. This returned me to my retail roots.

A Prescient Prediction

Although I only stayed at Gemco a few months, I was put in charge of a strategic systems project: To define the requirements for a new merchandising system. We started by interviewing the senior leaders of the firm and worked our way up the organization until we reached the final interview with the CEO, Peter Harris [1].

The interview summary, dated October 18, 1983, is quite interesting, especially in one paragraph where Harris said:
We need to recognize the changes that will come in the next decade due to the spread of advanced telecommunications. It is likely that 50% to 70% of basic hardgoods and commodities will be purchased from home, eliminating the need for store visits. However, apparel and other fashion merchandise will continue to be purchased in store environments, because of the psychological need to “go shopping.”
Today, I do not recall anyone in the retail industry in the early 1980s predicting the dawn of B2C e-commerce. And apparently, even 10 years later, I was still a skeptic. In the margin of that final report, there appears a note, in my own handwriting.
How wrong he turned out to be! –FS, 3/15/93 (10 years later!)
Peter Harris Interview Quote

But little did I know, 1993 was the year that the U.S. Congress passed a law to commercialize the Internet, and it was also the year that CERN released Tim Berners-Lee’s World Wide Web technology into the public domain. And, one year later, Jeff Bezos founded Amazon. But it took another two and a half decades before a worldwide pandemic pushed B2C e-commerce for certain categories of goods to the levels that Peter Harris predicted nearly 40 years earlier.

So, no, Harris’s prediction was not wrong. He was just off by about 30 years.

Lesson Learned: Keep an Open Mind

As Yogi Berra once said, predictions are hard, especially about the future. Like many others, I tend to be a skeptic, always looking for the negative side of an idea, or what could go wrong. In fact, a few years ago, I wrote a blog post mocking fellow analysts who make year-end predictions. I don't like to make predictions myself and I tend to be skeptical of those who do make them. I have to make a conscious effort to fight this tendency.

So what predictions are out there that might seem far-fetched today but could eventually come into realization?
  • The Metaverse. There are many breathless predictions these days about “the metaverse,” a virtual world where people and organizations can live and interact in a persistent and immersive 3D environment, where they can own virtual property, trade virtual goods, and be educated or entertained. Some argue that the metaverse already exists with various gaming platforms. Others think it is being overhyped by social media companies, such as Facebook (now branded as Meta) that are otherwise out of ideas about how to keep people engaged on their platforms in order to target them for advertisements.
  • Non-Fungible Tokens. NFTs have been a hot market over the past year, with sales of digital art, secured by NFTs on a blockchain, trading for thousands or millions of dollars. The fact that any piece of digital art can be saved with a mouse right-click makes it difficult to understand what exactly an NFT denotes in terms of ownership. The recent and rapid decline in the value of various NFTs confirms to skeptics that they are nothing more than the 21st century equivalent of Tulipmania.
  • Cryptocurrencies. Digital currencies using cryptography, such as Bitcoin, are built using blockchain technology. In contrast to fiat money, such as the US Dollar, they are not backed by a central government but are decentralized, permissionless, and virtually impossible to corrupt. Advocates predict they will replace fiat money, or at least exist alongside it, providing a hedge against inflation and very low transaction costs compared to traditional currency exchanges. At this writing, there is a collapse in cryptocurrency markets, confirming the view of crypto-critics that the whole thing is one big bubble.
It is easy to be a critic, or as Edward de Bono taught, to put on the black hat. It is not so easy to see the problems with an idea while at the same time seeing where there could be value. It is even more difficult to predict when exactly that value might be realized.

Sometimes, predictions are not wrong. They just take longer than we think to be realized.

Footnote

[1] Peter Harris is an interesting person, starting as a stocking clerk at Gemco and eventually working his way up to President from 1980 to 1984, when the firm achieved $2.2 billion in revenues. Later, he and his partner acquired FAO Schwarz, where he served as CEO until 1992. He went on to become President and CEO of the San Francisco 49ers (2000-2004) and held several other leadership positions after that. Today, he is retired and serves on several boards, including Palo Alto Medical Foundation. He is still on LinkedIn.

Update, May 22, 2022 

One of the joys in writing this series of career posts is reconnecting with people I worked with decades ago.  So, I sent a message to Peter Harris on LinkedIn.
Peter, I'm sure you don't remember me, but I interviewed you in 1983 at Gemco. I just wrote a blog post about your prediction about E-commerce. [Link to this post.] Let me know any feedback. --Frank
This morning he wrote back:

Frank, I am absolutely blown away to hear from you and read of your perspective, highlighted of course by your absolutely amazing record keeping mention of something I said many years ago.  While I think 30 years early doesn't count as anything beyond being impracticably thoughtful, I was honored and  hugely appreciative to be recognized.  Your article is fascinating and I am now following you so that I might observe and learn from your thinking and musings on other topics.  That you have tracked me down on LinkedIn and shared it means a lot.  The appropriate comments are "way cool," "awesome" or maybe even "wowza."  Thank you so very much.   I'd be interested to hear a bit more than is visible on LinkedIn about what you are doing now if you have time to share. --Peter

[Posted with Peter's permission.]

Update, Aug. 8, 2022

The same year, 1983, Michael Dertouzos made this incredible prediction of the World Wide Web. Click to watch. 
Michael Dertouzos video thumbnail


Wednesday, April 20, 2022

The Most Significant System Development Project of My Career

Drill rig
This post continues my series on lessons learned in my nearly half century in enterprise IT. We started in 1974 with my job at Macy’s headquarters in Manhattan, followed by my move to California in 1976 and my job at TRW Credit Data. I then took a job at Smith Tool in 1978, where I got thrown into the deep end with manufacturing systems. This led to several more important lessons learned, including the failure of a waterfall development project and my first encounter with shadow IT.

But there were more lessons to be learned at Smith Tool. 

Next would be the biggest and most important project of my career. Rolling off a series of manufacturing system development projects, I was now assigned to a task force to develop a strategic system to analyze the performance of Smith’s drill bits in the field. I would be the project manager for a small team of developers and the overall system architect. 

An Unspoken Objective

The first phase was to build a bit record database, which would become the foundation for several future systems. The database, which would ultimately contain millions of historical drilling records from around the world, would be used for preparing well proposals, evaluating product performance, conducting competitive analysis, and providing a feedback loop from the field to engineering to improve product quality. 

But there was another, unstated objective. Smith Tool had been sued for patent infringement by Hughes Tool (the business that made Howard Hughes his initial fortune). The patent was for a novel application of an O-ring, which sealed the lubricated bearing of the three roller cones from the harsh downhole conditions. O-rings (made famous for their failure in the space shuttle Challenger disaster) were in common use at the time, but Hughes had discovered that if you squeezed the O-ring a bit, it actually extended the life of the seal. This was counter-intuitive, but it worked. The litigation had been dragging on for over a decade, starting with Smith persuading a federal court in 1979 to invalidate the patent, and Hughes persuading a federal appeals court in 1982 to reverse that decision. That was just before I was assigned to the development project, which would be an important element in Smith’s defense.

The lawsuit, for about $1 billion, was at that time the largest patent infringement case in history. The lawsuit alleged that Smith’s use of the Hughes patent made Smith’s bits competitive with Hughes, earning Smith profits that it would otherwise not have earned. To defend against the Hughes claim, Smith would need a system to provide the data analysis. 

None of this was mentioned to me at the time. I only knew that the project was getting me a lot of attention from top management. In fact, my old manager, Rodger Beard, recently told me that at corporate headquarters they were talking about how my system would “save their bacon.” 

Lesson Learned: Immerse Yourself in the Business

Shortly after the project kick-off, I learned that there was a week-long training program about to start for new field salespeople. I invited myself in and got to sit through detailed lessons on Smith’s products and how they were used by customers. I found the whole week fascinating. [1]

Halfway through the week, Dan Burtt, the IT director, noticed I was not at my desk and found out about the class. “Why is Frank taking sales training?” he asked. I managed to convince him to let me finish. 

Since I had been developing or maintaining many of Smith’s manufacturing systems, I already understood the engineering and production data that would be needed to correlate with field performance. What I lacked was an understanding of that field data. My degree in Geology helped, but all of this was mostly new information. 

There were also some thorny design problems, such as how to designate well locations in different parts of the world, using different coding schemes. I spent several hours at the UC Irvine library learning about various geographic location systems in use in the U.S. and around the world, such as the section-township-range system, originally proposed by Thomas Jefferson.

In any new system development project, you have to start with a deep understanding of the business. It is not enough to have users tell you what they need. It’s more than gathering requirements. You have to have a sense of curiosity and immerse yourself in the industry and the business.

Lesson Learned: Take Advantage of Career Adversity

But the oil industry is notorious for booms and busts, and we were heading into a major bust. There was a massive company layoff, and the IT staff was not excluded. With fewer IT personnel, we didn’t need as many first level managers, so I was demoted back to project manager. Even worse, after I finished the requirements definition, my project was put on hold pending budget approval to move forward. This was the last straw. I resigned in August 1983 and returned to my retail industry roots, taking an IT manager position at Gemco, a now-defunct membership department store.

Beta Management Systems Logo

But, after a few months I got a call from Smith. The bit record project had been funded. Could I come back to lead it? I said yes, on one condition: I wanted to come back as a consultant, not an employee. I had been thinking for some time about a consulting career, and this was my opportunity. Smith agreed—I had so much knowledge of the project and the business requirements that it seemed like a small request. 

This launched my consulting career, as a sole proprietor doing business as Beta Management Systems. [2] [3]

Development, Implementation, and a Move into the Business

Now I was back at Smith, leading a small team of developers. I designed the system mostly as an online system (IBM’s CICS) but with a little batch programming to extract manufacturing and engineering data on a nightly basis. As usual, I wrote some of the most important code myself. The database was eventually going to hold millions of records, and it would be used for online analytical processing (OLAP), so it needed to be fast. I designed the database in IBM’s VSAM, and I set up alternate indices to provide quick access for the most common types of standard reporting. This was before the days of widespread use of relational databases, or at least before Smith had one. 
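For readers who never worked with VSAM, here is the alternate-index idea sketched in Python (illustrative only; on the mainframe, alternate indexes are defined with IDCAMS utilities, not application code, and the field names here are hypothetical). Each record is stored once under its primary key, and an alternate index maps a secondary field to the primary keys that carry it, so common reports avoid a full file scan.

```python
# Primary store: record ID -> bit record.
bit_records = {
    1: {"well": "W-100", "bit_type": "F2", "footage": 1200},
    2: {"well": "W-100", "bit_type": "J3", "footage": 950},
    3: {"well": "W-200", "bit_type": "F2", "footage": 1430},
}

# Build an alternate index on bit_type: value -> list of primary keys.
by_bit_type: dict[str, list[int]] = {}
for rec_id, rec in bit_records.items():
    by_bit_type.setdefault(rec["bit_type"], []).append(rec_id)

# Standard report: all runs of bit type "F2", fetched by key rather than scanned.
f2_runs = [bit_records[rec_id] for rec_id in by_bit_type["F2"]]
print(f2_runs)
```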

For the OLAP reporting, I used something new. The year before, I had been trained in FOCUS, a fourth-generation language from Information Builders (acquired by TIBCO in 2021). This was an excellent tool for reporting and analysis, especially for ad hoc inquiries. This is how I would develop the OLAP reporting that would prove instrumental later on in supporting the patent litigation.

Initial system development took less than a year. I still have a copy of the system user guide, dated November 1984. Users began loading bit records in 1985.

As soon as the system went into production, there was no more need for me in the IT department. But there was a huge need in the engineering department, where all that ad-hoc analysis would need to be done against the database. So, I left IT and went down the street to the “Hobie Cat Building” (the former owner) to begin as a consultant in the engineering group known as Product Evaluation. [4] 

Within Product Evaluation, I became part of a small team to develop the OLAP reporting for the bit record system. Looking back, this was the best experience I’ve ever had in a team. There was our manager, Jim Watson, who was a metallurgist by training and product failure analyst. Jim became a personal friend of mine over the years. There was Steve Steinke, a geologist, who provided the knowledge of the oil field. Rounding out the team was Joel Palmer, a statistician, who ensured that our analysis was statistically valid. Then there was me, the systems guy. 

Lesson Learned: Understand Basic Statistics

Textbook cover—Calculate Basic Statistics
Looking back, I now appreciate how the statistical validity of our analysis would be critical. This was important not only because we needed to ensure that the conclusions of our analysis were on a sound footing generally, but also because some of our analysis would be presented in court in Smith’s legal defense. 

I had started out as a math major at UPenn, but I’d never had a course in statistics. So, even though we had a statistician on our team, Smith brought in Dr. Mark Finkelstein, a mathematics professor from UC Irvine, to coach us once a week on basic statistics. He used his own textbook, pictured nearby. We learned about descriptive statistics and inferential statistics, regression, correlation, and confidence intervals.

The key point I learned was this: Just because a data set appears to show a correlation between two variables, it might not be statistically significant. For example, I might be asked to divide a sample of bit runs from a group of nearby wells into three groups according to some engineering parameter. My analysis might show that as the parameter increases, the bit performance improves. But that conclusion might be spurious. On more than one occasion I had to tell the requestor that, even though a graph might appear to support his theory, the statistics did not. 
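Here is a sketch of that kind of check in Python with SciPy, using made-up numbers for a handful of nearby bit runs. The correlation coefficient can look convincing while the p-value says the sample is too small to rule out chance.

```python
from scipy import stats

# Hypothetical data: an engineering parameter vs. footage for six bit runs.
parameter = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
footage = [900, 1400, 1100, 1600, 1300, 1700]

r, p_value = stats.pearsonr(parameter, footage)
print(f"r = {r:.2f}, p = {p_value:.2f}")  # r ~ 0.75, p ~ 0.09 for this sample

# The graph would slope upward, but with only six runs, p exceeds the usual
# 0.05 threshold, so the apparent correlation is not statistically significant.
```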

Eventually, the Smith lawyers asked me to perform statistical analysis in support of the patent litigation. In response to the court ruling that Smith was infringing on the Hughes patent, Smith had redesigned its bits to use an older seal, called a Belleville seal, instead of the O-ring. Smith contended in court that the new seal provided performance equal to that of the O-ring, and my analysis supported that conclusion. But the new seal was more expensive than an O-ring, increasing the cost of a tricone bit by about $29. According to a Los Angeles Times account of the trial: 

According to [Judge] Hupp’s chronology of the events that led to Smith’s using Hughes’ patented device, Smith stopped manufacturing the Belleville-type seals in 1972, in part because they made the Smith device cost about $29.02, or an estimated 3.2% of the total purchase price, more than the competing Hughes product.

Smith’s attorneys argued, therefore, that the damages to be awarded Hughes should be calculated based on the difference in product cost for the half million infringing bits (roughly 500,000 × $29.02 ≈ $14.5 million), rather than the billion-plus that Hughes was claiming.

Bottom line, as I was told: The judge agreed that the performance of the Belleville seal was equal to that of the O-ring but did not agree that damages should be based on the difference in cost. The judge assigned damages of just over $200 million. In other words, we won the battle that I was fighting, but lost the larger war. [5]

My appreciation of statistics would benefit me later in my career, when Dan Husiak and I acquired the IT research firm Computer Economics. I took over the research group, which collected and published metrics on IT spending and staffing. Many times, I was confronted with what appeared to be a correlation between IT spending and some other metric. My experience from Smith Tool taught me to be skeptical if the sample size was small. 

Postscript: Successor System Still Delivering Value

DRS Drilling Record System log in panel
The combination of the court judgment, a continuing downturn in the oil industry, and some poor business decisions was too much for Smith to overcome. The company filed for Chapter 11 bankruptcy protection, divested noncore businesses, and emerged from bankruptcy later that same year. I was still working as a contractor to Smith through this entire time, though on less than a full-time basis. This gave me time to develop business with other clients.

So, what happened to the Bit Record Database? In 1988, while I was winding down my work on the system, Steve and Jim delivered a presentation at the IADC/SPE Drilling Conference. They reported that the system contained 100,000 bit records. They also reported that the team had built an interface from the mainframe to PCs running dBase in field offices, which was how they were preparing bit programs for new wells.

Then, in the mid-1990s, I got in touch again with Steve, who told me that Smith had migrated the system from the IBM mainframe to a personal computer running the Progress database. 

So, in writing this post, I got curious: Where is the Bit Record System today? Smith was acquired by Schlumberger in 2010, which rebranded the Smith Tool business as Smith Bits. A little digging uncovered a recent edition of the Smith Bits product catalog, which has an interesting page on something called the “DRS drilling record system.”

The Smith Bits DRS drilling record system is a collection of nearly 3 million bit runs from virtually every oil and gas field in the world. The database was initiated in May 1985, and since that time, records have been continuously added for oil, gas, and geothermal wells. With this detailed data and the capabilities of the IDEAS platform, Smith Bits engineers can simulate bit performance and make changes to their bit designs to optimize performance in a specific application. [Emphasis added]

With that date of May 1985, I have no doubt that this is the successor to the Bit Record Database. It is interesting that Schlumberger has renamed it the Drilling Record System. The new name may reflect the fact that, even in my original design, the system included data on bottom hole assembly tools other than rock bits, as well as other drilling data such as hydraulics. We called it the Bit Record Database because the form the system was based on was commonly called a bit record. A DRS screen shot is shown below.

DRS Drilling Record System screen shot

Update, Aug. 13, 2022. I have now reconnected with my old teammate, Steve Steinke, who retired two years ago from Schlumberger's Smith Bits group. Steve worked with the DRS system throughout the years since we were teammates. He confirmed my recollection of our discussion in the mid-1990s that Smith converted the system to a single PC running the Progress database, mainly to get off the mainframe. Then, around 1999, Smith rewrote the system on an Oracle platform and at the same time greatly expanded its functionality to include records of other downhole tools besides rock bits. The team continued to expand the system to cover other drilling equipment and systems as well. It now even includes geological data, such as the formations encountered at various depths. Today it contains something like 1.5 million wells and is used by other Schlumberger business units in addition to Smith Bits.

In an interesting side note, Steve confirms that the worldwide geographic location coding system I developed is still part of the system design. He also personally enhanced the design to automatically derive latitude-longitude from section-township-range, to more easily identify offset wells.
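
I don't have Steve's implementation, but the geometry behind such a derivation is straightforward to sketch. The following hypothetical Python sketch approximates a section center from township, range, and section numbers, assuming a known baseline and principal meridian; a production system would use official PLSS survey data rather than idealized six-mile townships:

```python
# Hypothetical sketch (not Steve's actual code): approximate a PLSS section
# center as a latitude/longitude offset from a survey's baseline and principal
# meridian. Townships are idealized 6-mile squares of 36 one-mile sections,
# numbered in a serpentine pattern starting at the northeast corner.
import math

MILES_PER_DEG_LAT = 69.0  # rough average; adequate for spotting offset wells

def section_center(section: int) -> tuple[float, float]:
    """Miles (east of the township's west edge, north of its south edge)."""
    row_from_top = (section - 1) // 6
    pos_in_row = (section - 1) % 6
    # Rows alternate direction: 1-6 run east-to-west, 7-12 west-to-east, etc.
    col_from_west = 5 - pos_in_row if row_from_top % 2 == 0 else pos_in_row
    return col_from_west + 0.5, (5 - row_from_top) + 0.5

def str_to_latlon(township_n: int, range_w: int, section: int,
                  baseline_lat: float, meridian_lon: float) -> tuple[float, float]:
    """Approximate lat/lon for a 'Township N, Range W, Section' description."""
    east_mi, north_mi = section_center(section)
    miles_north = (township_n - 1) * 6 + north_mi
    miles_west = (range_w - 1) * 6 + (6 - east_mi)
    lat = baseline_lat + miles_north / MILES_PER_DEG_LAT
    # A degree of longitude shrinks with latitude.
    miles_per_deg_lon = MILES_PER_DEG_LAT * math.cos(math.radians(lat))
    return lat, meridian_lon - miles_west / miles_per_deg_lon

# Example: a hypothetical well at T2N, R3W, Section 14.
print(str_to_latlon(2, 3, 14, baseline_lat=35.0, meridian_lon=-97.35))
```

Real townships deviate from this ideal grid because of survey corrections and the earth's curvature, which is why production systems look up official corner coordinates instead.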

In any event, I am proud that the system development work I did in the 1980s, over a period of about eight years, continues to deliver value today.


Footnotes

[1] The training sessions were not all technical. There were lessons on how to behave properly in the field, including advice such as, when driving through a gate on a cattle ranch, be sure to close the gate behind you. Another lesson told us never to beg for business or to claim that you’ll get fired if you don’t make the sale. There was another lesson with a pamphlet entitled, “How to Turn WAGs into SWAGs,” where a SWAG is a scientific WAG. It had something to do with using data in sales proposals. We also learned that in the early days, Smith was known as the “Whisky Bit,” because salespeople would put a bottle of whisky in the pin of the bit. So, when the roughnecks would get thirsty, they’d say, “Let’s open one of them whisky bits.”

[2] There was no significance to the word Beta. I didn’t have money to spend on a logo, so I figured I could get the printer to use the Greek letter beta in place of the normal font. That allowed me to use the business name as a logo. 

[3] Having at least a year of guaranteed contract work, maybe more, was a huge factor allowing me to break into consulting. A year earlier, our third child, Joanna, was born, and we had just bought our first home. Finances were tight. As it turned out, though, my work with Smith took me through most of the 1980s as I then began to add other clients, mostly through referrals from other “Smithereens” (people who had quit or left Smith during the rounds of layoffs). 

[4] Among other responsibilities, the Product Evaluation group provided post-mortem analysis of bits that failed in the field. They had a large room they called the “morgue,” with failed bits laid out in tabletop trays. The group included metallurgists and engineers who did root cause analysis to determine the causes of failures and make recommendations for changes in product design, manufacturing processes, and quality procedures.

[5] This was a stressful time, with the Smith legal team often asking for additional ad-hoc analysis, sometimes just as I was about to leave for the day. But, to their credit, they did a good job keeping my name out of discovery so I wouldn’t have to be deposed. I think it helped that I was a contractor and not a Smith Tool employee. Not that we had anything to hide. But it wouldn’t have been a pleasant experience. Jim and Steve were deposed and testified in court. I got to see a trial transcript, and from what I read and what they told me, it was grueling. 

Photo Credit: Drill Rig, Pixabay


Friday, December 24, 2021

Cerner Acquisition to Launch Oracle Higher into Healthcare

Oracle Logo and Cerner Logo with medical doctor using a touch screen
Earlier this month, Oracle and Cerner jointly announced an agreement for Oracle to acquire Cerner, a provider of digital systems to healthcare providers. The deal, valued at approximately $28 billion, will be the largest in Oracle’s history, nearly three times the size of its PeopleSoft acquisition in 2005.

To understand the rationale behind the deal and what it means for the two companies, the industry, and especially for Cerner customers, we interviewed Avasant partners, consultants, and fellows who focus on the healthcare industry.  This research byte summarizes our point of view.

Read this post on the Avasant website: Cerner Acquisition to Launch Oracle Higher into Healthcare