Wednesday, September 18, 2019

IFS and Acumatica Living Together in the ERP Space

Photo Credit: Jon Reed, Diginomica
When does the relationship between two tech vendors look like a merger but is not actually a merger?

For all intents and purposes, that’s exactly what just happened with two players in the enterprise resource planning (ERP) industry. EQT Partners, a global private equity fund, recently finalized a deal to buy Acumatica. Although EQT already owns another ERP firm, Sweden-based Industrial and Financial Systems AB (IFS), it is not merging them. Rather, the two firms will work closely together, while remaining separate entities under the same EQT holding company. EQT’s Jonas Persson will serve as chairman of both companies, and IFS CEO Darren Roos (pictured above, on the right) will assume a seat on Acumatica’s board.

IFS may not have the name recognition of SAP, Oracle, or Microsoft, but with about 10,000 customers worldwide, it is much larger than Acumatica. Its customers include corporate giants such as Toyota, BMW, Pepsi, John Deere, and the world’s largest container shipping company, Maersk.

A merger was, in fact, on the table. EQT’s acquisition came after IFS considered buying Acumatica outright, Roos said recently at an analyst event. After weighing the options, EQT decided that IFS and Acumatica are different enough that they should remain separate.

For its part, Seattle-based Acumatica has grown to more than 5,200 mostly small and midsize customers in 11 years. It is known for packaging its products into industry “editions.” Each edition marries Acumatica’s horizontal functionality (primarily financials, distribution, and customer management) with industry-specific modules, such as commerce, construction, manufacturing, field service, and distribution. Acumatica sells these editions through a network of value-added resellers (VARs).

Read the rest of this post on the Strativa blog:
IFS and Acumatica Living Together in the ERP Space

Wednesday, August 28, 2019

The Use and Misuse of PaaS

One of the key advantages of modern cloud systems is that they often come with rapid development platforms that allow the vendor, partners, and even customers to build extensions and customizations to the system without affecting the underlying code or architecture of the base system. These are generally known as Platform as a Service (PaaS).

Examples include the Salesforce Lightning Platform (formerly,, the SuiteCloud platform of Oracle’s NetSuite, Acumatica’s xRP platform, Sage Intacct’s Platform Services, Microsoft’s Power Platform, and many others.

However, as with so many good things in life, PaaS can be used and abused.

Read the rest of this post on the Strativa blog:
The Use and Misuse of Platform as a Service 

Wednesday, July 17, 2019

The Benefits of Business Process Framing

In selecting and implementing a new enterprise system, business leaders have learned the importance of evaluating business processes. “Let’s not make this an IT project,” they say. “Let’s really understand our current business and our vision for the future.” Without a doubt, this is right, and we encourage our clients to do exactly that.

However, business leaders often think that this means they should begin with detailed process mapping of their existing processes. “Let’s have someone come in and map all our business processes,” they say.

At first glance, this seems logical. If we want to define our business requirements, what better way than to map our “as-is” processes?

Why is this not a good idea? There are at least three reasons.

Read the rest of this post on the Strativa blog: The Benefits of Business Process Framing

Friday, June 28, 2019

Time for a Declaration of Independence from Software Vendors?

When it comes to enterprise IT, every so often we begin to notice things that cause us to question our basic assumptions. The latest is about the role of commercial software.

The traditional advice for companies is that it is best to standardize on a commercial software vendor for the core of the applications portfolio. It might be a major vendor, such as SAP, Oracle, or Microsoft, or it might be any number of other providers. Custom software should be the exception, not the rule, whether for unique industry requirements, or for modifications and extensions to the core system. The more you can rely on a commercial software vendor, the better.

We’ve been giving this guidance for decades, whether for on-premises systems or with cloud-based systems.

Nevertheless, some of our clients are starting to rebel against the conventional wisdom by developing more of their own software in-house. Moreover, they are not doing it just on an occasional or exception basis or for niche applications. They are doing it for domains where we traditionally assumed that commercial software was the natural choice.

Read the rest of this post on the Strativa blog: Time for a Declaration of Independence from Software Vendors?

Wednesday, June 19, 2019

What I Learned at Macy's about Enterprise IT

As incredible as the changes have been in information technology over the decades, what is also fascinating is the ways in which things have not changed. As I am approaching the half-century mark in my career, I thought it would be helpful to look back to understand the lessons learned that still apply today.

As you can imagine, this will be a much more personal post than usual. And be sure to check out the footnotes, which include interesting but tangential details about work life during that time.  

My career began at R.H. Macy in 1974, at its headquarters and flagship store at Herald Square in Manhattan, made famous in part by the film Miracle on 34th Street. Just as in the movie, the store was directly across the street from Macy’s greatest rival at the time, Gimbel's. And, even in 1974, much of the building looked the same as it did in the 1947 movie.

Hired with No Experience

I did have some programming courses at the University of Pennsylvania, which I took after I realized that my geology major probably meant a career in the oil patch or in mining regions of the world—not places where I particularly cared to live. So, I thought, how about computer programming? As an undergraduate, I had one course in programming at Penn’s Moore School, home of the ENIAC, one of the first digital computers. In that course, I learned some Fortran and ALGOL, both now largely forgotten. The next year, I took a graduate course in IBM assembly language. I also did some Fortran programming for a few months for a professor in environmental science at nearby Drexel University.

This was the extent of my programming experience by the time my newlywed wife, Dorothy, and I moved to New York City. She took a job as a medical transcriptionist at New York University Hospital1, and I started searching the classified ads in the New York Times.

Macy’s was running an ad for computer programmers. A college degree was a prerequisite, but no programming experience was required. As part of the hiring process, Macy’s administered a test for programming aptitude—mostly for ability to understand symbolic logic. I aced it, and I was hired into a group of about 25 applications and systems programmers.

Interestingly, Macy’s Data Processing (DP) department (as it was known before MIS or IT became the common term) only hired trainees. Even the department head, Joel Thayer, had started as a trainee. (In addition to his job responsibilities, Mr. Thayer was also the head of the roller-skating clown act in Macy’s annual Thanksgiving Day Parade. I was invited to join but, to my regret, never did.)

Go Talk to the Users

COBOL and IBM assembly language were the two programming languages in use then at Macy’s. I’d had a little assembler language training but no COBOL. So, Mr. Thayer sat me down at my desk2 and gave me two COBOL training manuals to study. After a day and a half, I got bored and told him I was ready for an assignment.

So, he brought me into his office3 and explained that we were a few months from the holiday season and that he had a quick project for me: Take Macy’s entire credit card account file and print “Holiday Money” coupons for eligible customers. These could be used by account holders like cash in the store, except that any purchases made would simply go on their credit card accounts. The thought was, if you give people something that feels like cash, they might spend more, or at least choose to shop at Macy’s rather than at a competitor’s store.

It was a pretty straightforward specification4, and I thought I could write the code5 based on Mr. Thayer’s verbal instructions. But first he told me, “Now, put on your sports jacket, and go down and talk to Richard Miles, the VP in the credit department, and be sure this is right.”

Less than two days on the job as a trainee, and I was going out (by myself!) to talk to a senior executive about his requirements. I’m pretty sure that Mr. Thayer knew it wasn’t necessary to have me do that interview, and, as expected, the meeting was uneventful. But Mr. Thayer was teaching me my first lesson.

Lesson Learned: Our job is not just about technology. It is about understanding the business, and you can only learn business requirements by getting close to the users. Keeping with this principle, every new programmer at Macy’s started with the role of programmer/analyst, not just programmer. Understanding user requirements was part of the job from your first day. Here’s the first way things have not changed.

Programming without a Computer

I can’t find evidence of this, but the old-timers told me that Macy’s was only the second or third commercial organization to deploy a computer, one built by National Cash Register (NCR) in the early 1950s. Of course, in those days, virtually no one had experience in computer programming, so Macy’s trained its own programmers from the ranks of a department known as “Systems and Procedures.” (More about that department in a moment.)

At the time I was hired, there were still two or three of those first programmers working at Macy’s, now as DP managers. The most senior one, Abe Horstein, was the original programmer who wrote the code for the accounts receivable system, which managed all of Macy’s credit card operations.

Although Macy’s rewrote this system in the 1960s in COBOL for the IBM mainframe, Abe was still the go-to guy when we had questions on the program logic. To refresh his memory on particularly complex sections of the code, he would often reach down to the bottom drawer of his desk and pull out a three-ring binder with old dog-eared hand-drawn flow charts, which he called bubble charts, similar to what I’ve drawn nearby6.

During one such session with Abe, he told me a funny story. Macy’s contracted with NCR for that first computer, probably a year or so in advance of its being built. In the meantime, Macy’s wanted to get started on its top priority—computerizing that A/R system. Abe was tapped for the job. He learned to code and began a months-long effort to design and program the new system. The plan was to have Abe fly to California (I believe) to test his program prior to the new computer being shipped to New York.

Finally, the big day came. The computer was built, and Abe was ready to fly out west. But first, fearing the possibility of a plane crash, Macy’s top management insisted on photographing the hundreds of pages of computer code—but not as a backup. They actually kept the originals and sent Abe with the photocopies!

Ultimately, Abe got the program running and the new computer was shipped to New York, where it supported Macy’s A/R system7. By the time I left Macy’s two and a half years later, I was responsible for the nightly processing and maintenance of that system, now rewritten for the IBM 3608.

Lesson Learned: Some programmers today think that well-written code is its own documentation. I disagree. Well-written code can explain what the program does, but what is often missing is the “why.” Although program flow charts are usually not needed, there is still the need for documentation at a higher level, especially concerning the business logic. Throughout my career as a developer I always made an effort to document things I thought my successors would want to know.

Does Macy’s Tell Gimbel’s? 

My first programming assignment was successful. After I had finished, Mr. Thayer told me a funny story. As noted earlier, Gimbel’s department store was still across the street from Macy’s, and the two were rivals. In fact, “Does Macy’s tell Gimbel’s?” was, at the time, a common saying indicating that competitors do not share business secrets with one another.

Despite this intense rivalry as retailers, Mr. Thayer and his peer at Gimbel’s had somehow managed to enter into a friendly competition whereby each of them would attempt to find vulnerabilities in the store processes of the other.

To facilitate the competition, Mr. Thayer had applied for and had received a Gimbel’s credit card. As a result, Mr. Thayer had received, in the mail, Gimbel’s equivalent of “Holiday Money,” which he promptly used to purchase some small item at Gimbel’s. He then turned around the next day and returned the item and received his refund in cash. Bingo. If he had made the purchase by credit card, the returns department would have simply applied a credit to his card balance. But Gimbel’s treated Holiday Money as if it were really cash. In effect, he had gotten a cash withdrawal on his Gimbel’s credit card, something that should never have been allowed. Mr. Thayer then went over to visit the Gimbel’s DP manager to let him know about the vulnerability. It was, indeed, a friendly rivalry.

Lesson Learned: Take the opportunity whenever possible to learn from your industry peers, even if they are competitors. Of course, no one should share trade secrets with competitors, but we can and should learn from one another in areas such as IT security, technical standards, and open source, where we can all mutually benefit for “the greater good.”

“I Just Want a Hot Steak”

Earlier, I mentioned that Macy’s DP department had its roots in a group called the Systems and Procedures department. The word systems here did not refer to computer systems but to the manual systems that ran Macy’s store operations. They spent a lot of time designing forms, filing systems, and procedures. So, when computers became commercially available, this department was the natural group to implement them to automate those procedures.

Macy’s DP department still had this orientation toward business processes when I joined almost 25 years later. For example, one day Abe told me an interesting story while we were having lunch down in the employee cafeteria. He pointed to employees paying at the cash register. He remarked that a few years earlier he was frustrated that there would typically be a bottleneck in front of the cashier and that, by the time he sat down to eat, his lunch was cold. So, he wrote up a formal suggestion to rebalance the serving lines and cashier lines to remove the bottleneck. Today, we’d call this lean thinking. Abe’s solution didn’t involve computers, but it greatly improved the process by getting employees quickly to their tables after being served.

Macy’s employee suggestion program included a reward program for suggestions that were accepted and implemented. Abe’s suggestion was soon implemented, but he refused the reward. He said something to the effect of, “It’s my job to improve processes. I don’t want a reward. I just want a hot steak.”

Lesson Learned: Despite all the advances in technology, enterprise IT is still all about thinking in terms of business processes. If you’re going to be successful, you have to be like Abe, who was reengineering business processes even while at lunch, at least 20 years before Michael Hammer coined the term.

Human Costs Unavoidable

But process improvement wasn’t always painless. Sometimes it automated people out of their jobs.

For example, when I first started at Macy’s, there was a department of about 20 women who produced daily flash sales reports using large mechanical calculators, as shown nearby. This group sat right next to our DP department.

One morning as I walked to my desk, I saw that this entire group was gone—all the women and their calculating machines had vanished. I asked Abe what happened. “We wrote a system,” he replied. He didn’t say it in a cold way, but as if to say, it’s too bad but we have no choice.

The impact of the new system was great. It was much faster and more accurate than human calculators. Instead of tabulating data from paper receipts, data from cash registers could now be collected and fed into the IBM mainframe. This allowed daily sales to be reported more quickly, greatly improving management decision-making. If Macy’s hadn’t done it, Gimbel’s surely would have, and no doubt did.

Interestingly, Macy’s was the last place where I saw a human elevator operator. He was a kindly old man, who was still there when I left a year or so later. His job, of course, eventually was automated.

So, yes, there is a human cost. But as these jobs were being destroyed, new jobs, like mine and the rest of our team’s, were being created. As a result, unemployment today is actually lower than it was at the time Abe automated the flash sales department.

Lesson Learned: Today, robotics, machine learning, and other new technologies continue to automate jobs out of existence. Is there a human cost? Of course there is. Do workers need to be retrained? Yes. But if history is any guide, the labor market will adjust. Color me optimistic.

Equipped for the Next Chapter

After two and a half years, I was a pretty good COBOL programmer and passable in IBM assembly language—enough to qualify me for my next job, which involved a move cross-country.

Remember that vice president whom I interviewed about Holiday Money the first week on the job? He gave me a job recommendation that proved critical. But more importantly, I was equipped with important lessons learned in what it means to be a business analyst, understanding the business through the eyes of the users, and helping them improve business processes, themes that continued through the rest of my career.

Update: For the next chapter, see: What I Learned at TRW Credit Data about Enterprise IT. 


1My wife’s office was at the end of a long dark hallway that connected it to Bellevue Hospital, founded in 1736, making it the oldest public hospital in the US. At the time (and still now), it included a prison ward for treating inmates and a psychiatric ward. “You are a candidate for Bellevue” remains a family saying of ours to this day.

2The 24 programmer/analysts, like me, sat in two rows of 12 desks each, back to back, with no cubicles, no partitions. We were practicing open offices before they were a thing. 

3Unlike the rest of us, Mr. Thayer, as the director of the department, had the only closed-door office. The two managers under him, including my manager, Jack Krigstein, had little cubicles, one at the tail end of each row of programmers (i.e., they were looking at our backs). It was tight quarters. It was also an austere environment. Forget free snacks; we didn't even have free coffee, or any coffee in the office, for that matter. A couple of months before I left, the whole department chipped in to buy a coffee maker, but we used it only for hot water to make instant coffee.

4There was an interesting wrinkle to the specification. The preprinted forms were designed “two-up,” meaning that as the form passed through the printer, the program needed to print two customers side by side. But to take advantage of bulk mailing rates, it also needed to print them in zip code sequence. Because of the way the splitting and bursting machine worked, I had to extract the necessary data, count the number of customers, sort the file in zip code sequence, split it exactly in half, and then merge the second half side by side with the first half so they could be printed two-up. So, it was not a trivial first assignment.
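In modern terms, that sort-split-and-merge step might look like the following Python sketch. This is a hypothetical reconstruction for illustration only; the real program was handwritten COBOL, and the names and data here are invented.

```python
# Hypothetical reconstruction of the "two-up" coupon layout logic:
# sort eligible accounts by zip code, split the file exactly in half,
# then pair the first half with the second half so each printed form
# carries two customers side by side, while each half of the burst
# stack remains in zip code sequence for bulk mailing.

def two_up_layout(accounts):
    """accounts: list of (name, zip_code) tuples for eligible customers."""
    ordered = sorted(accounts, key=lambda a: a[1])  # zip code sequence
    half = len(ordered) // 2                        # assumes an even count
    left, right = ordered[:half], ordered[half:]
    return list(zip(left, right))                   # one pair per form

forms = two_up_layout([
    ("Smith", "10001"), ("Jones", "10003"),
    ("Brown", "10002"), ("Davis", "10004"),
])
# Each element of forms is one printed form: two customers, side by side.
```

After bursting, the left-column stack holds the first half of the zip sequence and the right-column stack holds the second half, so stacking one under the other restores full zip code order.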

5Program coding back then was all handwritten. Look down those two rows of desks and you wouldn’t see a single desktop computer, not even a dumb terminal. We wrote all our code in pencil, on standard programming forms. If you wrote out a program and then decided to reorganize it, you got out scissors and Scotch tape (literally, a cut-and-paste). We turned in our handwritten forms to keypunchers, who punched them onto 80-column card stock, which we would submit to the computer room for compilation. A few months after I was hired, we got direct access to the card punch machines, which saved us quite a bit of turnaround time for code changes. Source code libraries were just around the corner, but at this time we filed the physical card decks in cabinets, along with the compilation printouts and the generated object code (also on card stock).

6I have not been able to find an example of Abe’s style of flow-charting anywhere on the Internet. This must have been a style from the very earliest days of programming. IBM later popularized a more complex version with various shapes, each of which had a particular meaning. In my opinion, Abe’s style was simpler and more useful, and I reverted to it from time to time even years later.

7Today, hardly anyone would dream of writing a custom A/R system. But this was standard practice back then. When I joined Macy’s in 1974, every single business application in the company was custom-written, even payroll and general ledger. I didn’t encounter my first commercial software package until four years and two jobs later.

8Although the IBM 360 was a tremendous step forward from previous generations of NCR and IBM mainframes, core memory was just 125K. That’s kilobytes, not megabytes. So we were always looking for ways to optimize our code and save a few bytes here or there. A few years earlier, when memory was even more constrained, one programmer had come up with a unique way to reduce memory requirements for a batch program. Instead of including the end-of-run logic as part of the program, he compiled that logic and stored it as executable code in the last record of the input file, which was on tape. When the program got to that record, it would load the record into core memory and branch to it to execute the end-of-run logic. It was a brilliant solution, but it was extremely difficult to maintain.
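As a toy illustration of that trick, here is a modern Python analogue. The original, of course, was IBM assembler reading a tape file; everything below (the record values, the variable names) is invented, and `exec` stands in for loading a record into core and branching to it.

```python
# Modern toy analogue of the memory-saving trick in footnote 8: the
# end-of-run logic is stored as executable content in the LAST record
# of the input file, and is loaded and run only when that record is
# reached, so it never occupies memory during normal processing.

records = [
    "100",
    "250",
    "175",
    "summary = 'end of run: total = %d' % total",  # code, not data
]

total = 0
for rec in records[:-1]:   # process the ordinary data records
    total += int(rec)

exec(records[-1])          # load the final record and "branch" to it
print(summary)             # prints: end of run: total = 525
```

The same property that made it clever made it hard to maintain: the end-of-run logic lived in the data file, invisible to anyone reading the program listing.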

Image Credits:

1: Macy's storefront today, Paulo JC Nogueira
2: Roller skating clown, Diariocritico de Venezuela
3: Example of bubble chart format, from author's memory. (Not an original of Abe's).
4: Burroughs adding machine, Chris Kennedy.

Monday, June 10, 2019

Getting ERP Users to Upgrade—Cloud vs. Traditional Systems

One of the great challenges facing traditional ERP vendors is getting customers to keep up with the latest version. Cloud ERP systems are supposed to solve this problem, by making the vendor responsible for upgrades and keeping all customers on a single version.

However, sometimes, even SaaS providers need to make changes that are so significant and potentially disruptive that customers resist the change.

Read the rest of this post on the Strativa blog: Getting ERP Users to Upgrade—Cloud vs. Traditional Systems

Thursday, April 25, 2019

What Is Digital Transformation, and How Do We Get There?

In enterprise technology, digital transformation is a hot topic. But what does it really mean? Overused by vendors and consultants, the phrase has become nearly meaningless.

This needs to change. Digital transformation should be something practical and tangible, something in reach of all organizations—provided business leaders make a sustained effort.

This post provides a simple definition of digital transformation, breaks down the main types of digital transformation, and recommends an approach for developing a digital transformation strategy.

Read this post on the Strativa blog:  What Is Digital Transformation, and How Do We Get There?

Wednesday, April 17, 2019

Google Getting Serious about Enterprise IT

This just in: Google has announced the hiring of Rob Enslin as President of Global Customer Operations for its Google Cloud unit.

Why is this a big deal? Because only last month Enslin announced his departure from SAP, where he spent 27 years and was most recently in charge of SAP's entire cloud portfolio. He was also a member of SAP's executive board.

Enslin will be reporting to Thomas Kurian, who was recently hired by Google as CEO of Google Cloud. Kurian, of course, was highly regarded during his 22-year career at Oracle, where he was most recently the President of Product Development. He was also the brains behind Oracle's Fusion line of cloud applications, which represents Oracle's future as a cloud application services provider.

Kurian writes:
Today, it is my pleasure to introduce Robert Enslin, Google Cloud’s new President of Global Customer Operations. Rob’s expertise in building and running organizations globally, business acumen and deep customer and partner relationships make him a perfect fit for this crucial role. Rob will report to me, and he starts on April 22. Rob spent the last 27 years at SAP in leadership roles across sales and operations, most recently as the President, Cloud Business Group and Executive Board Member. He developed and managed SAP’s entire cloud product portfolio, led the field revenue and enablement efforts across multiple geographies, and oversaw core functions including professional services, ecosystem, channel, and solutions. Rob brings great international experience to his role having worked in South Africa, Europe, Asia and the United States—this global perspective will be invaluable as we expand Google Cloud into established industries and growth markets around the world.
Just today in a private message a fellow analyst said, in another context, that the "enterprise software boat is being rocked." I replied that it needs to be rocked, and maybe it needs to be capsized.

Perhaps Google getting serious about enterprise technology is just what the market needs. For now, Google's immediate objective appears to be to take on Amazon and Microsoft in cloud infrastructure services. But with the hiring of Kurian and Enslin, will Google also start moving into enterprise applications? Or will it be content to remain a platform provider?

Watch who the next new hires are. That will give us a clue.

Tuesday, February 12, 2019

When Are On-Premises Systems Justified?

There is near-universal agreement that cloud computing is the future for enterprise IT. Our research at Computer Economics certainly indicates so. In just one year, our annual IT spending survey showed the percentage of IT organizations with 25% or fewer of their application systems in the cloud declined from 72% in 2017 to 61% in 2018. We expect a further decline this year.

Four Factors Favoring On-Premises

Even though the trend is strongly in the direction of cloud, are there situations where on-premises deployment is still justified? In a recent article, Joe McKendrick outlines four situations where staying on-premises may be preferable to cloud, at least for now. He writes:
To explore the issues of when staying on-premises versus cloud makes sense, I asked industry executives about any areas that were not suitable for cloud, and better left on-premises -- especially from the all-important data perspective. The security implications, as well as geographical presence requirements, are obvious. But there are also other facts that may make staying on-premises the most viable option.
Joe goes on to outline four factors:
  • Legacy entanglements: where the system is just one part of an integrated set of applications, especially where there are dependencies on certain database or platform versions. “Monolithic legacy applications” with custom system administration tools are another example.
  • Cloud sticker shock: where data storage requirements are so great that cloud deployment is simply not economical.
  • Security: where “some data cannot risk even a hint of exposure.”
  • Need for speed: where large data sets are maintained for “real-time user data interaction, high-speed analytics, personalization, or recommendation.” Some IoT applications may fall in this category. 

The Four Factors Not as Great as They Once Were

While these four factors are worth considering in a cloud vs. on-premises decision, I find them to be less of a factor than they were even a few years ago.
  1. The legacy system factor is certainly reasonable in some situations. To this I would add, staying on-premises may be justified when requirements for a new system can more easily be accommodated with an add-on to the legacy system. Be careful with this, however, as it can be a prescription for further entrenchment of the legacy system.
  2. In my view, cloud sticker shock is only a factor for a small percentage of cases, perhaps for very large data sets. Declining costs of cloud storage should lead to fewer instances where this is a legitimate objection. Often, IT leaders making a case for on-premises systems based on cost are not factoring in all costs, such as the cost of personnel to maintain and back up that on-premises storage.  
  3. The security factor I find to be largely an excuse. Although business leaders often underestimate the impact of a potential security breach, they also tend to overestimate the capabilities of their own security staff members, processes, and technology. The level of security maintained by internal IT organizations is usually far less than what is achieved by cloud services providers. If one of the big three credit data providers (Equifax) could not protect consumer data maintained on-premises, what makes you think that your security capabilities are greater?
  4. The need for speed, in some cases, may be a legitimate reason for keeping some systems on-premises. However, most enterprise applications do not have this requirement. Even manufacturing execution systems—systems with low latency requirements—have been successfully deployed by cloud applications providers, such as Plex. In other cases, local buffering of data may be possible to accommodate any latency between the local system and the cloud provider. In such cases, it may be better to make investments in high-speed data communications, with redundancy, rather than continue to maintain such systems in local data centers. 
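The cost comparison in point 2 is easy to test with back-of-the-envelope arithmetic. The Python sketch below uses entirely invented figures, but it shows how leaving out personnel and backup costs can make on-premises storage look cheaper than it really is:

```python
# Hypothetical five-year storage cost comparison. All figures are
# invented for illustration only; substitute your own numbers.

years = 5
cloud_per_year = 60_000            # cloud storage subscription

onprem_hardware = 120_000          # storage arrays, one refresh cycle
onprem_admin_per_year = 40_000     # fraction of a storage admin's time
onprem_backup_per_year = 15_000    # backup software, media, offsite copies

naive_onprem = onprem_hardware                       # hardware only
full_onprem = onprem_hardware + years * (onprem_admin_per_year
                                         + onprem_backup_per_year)
cloud_total = years * cloud_per_year

print(naive_onprem, full_onprem, cloud_total)
# The hardware-only figure beats the cloud price; the fully loaded
# figure does not.
```

The point is not these particular numbers but that the naive figure and the fully loaded figure can land on opposite sides of the cloud price.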
There is one more factor in favor of on-premises systems: Where there are regulatory requirements that the organization demonstrate control over the production environment. This includes FDA-regulated companies where a system is used to support regulated processes, such as quality control in medical device or pharmaceutical manufacturing. Although it may be possible to meet the requirement in a multi-tenant cloud environment, many regulatory affairs professionals are more comfortable not fighting that battle. In such cases, it may justify an on-premises deployment or at least a single-tenant hosted deployment where control of the production environment can be more readily assured.

Cloud First the Best Strategy

As discussed, there are situations where a true on-premises system may be legitimately justified, although the case is getting weaker year by year. Nevertheless, for most new systems, business leaders should be adopting a “cloud-first” strategy, even if "cloud only" is not practical for now. If there is a cloud solution that will meet business requirements, that should be the preferred path forward. The advantages of cloud systems, especially in terms of alleviating the burden of system upgrades, are too great to ignore. On the other hand, if no true cloud system meets business requirements, or there are other limiting considerations, an on-premises solution may be a legitimate option. But even then, we would prefer to see a hosted solution, in order to achieve some of the benefits of getting application systems out of on-premises data centers.

Wednesday, January 16, 2019

Why Is Open Source Not More Successful for Enterprise Applications?

Although open source software now completely dominates some categories of software, this has not been true for enterprise applications, such as ERP or CRM. What is it about enterprise applications that makes them so resistant to open source as a business model? 

My friend and fellow analyst Holger Mueller has a good post on Why Open Source Has Won, and Will Keep Winning. Read the whole thing. In Holger's view, which I share, the battle between open source and proprietary software is over, and open source won. In just fifteen years or so, it has become hard to find any commercial software vendor attempting to build new platforms on proprietary code. He writes:
Somewhere in the early 2000s, Oracle dropped its multi-year, 1000+ FTE effort of an application server… to use Apache going forward… that was my eye opener as a product developer. My eye opener as an analyst was in 2013, when IBM’s Danny Sabbah shared that IBM was basing its next generation PaaS, BlueMix, on CloudFoundry… so, when enterprise software giants cannot afford to out-innovate open source platforms, it was clear that open source was winning. As of today, there is no 1000+ people engineering effort for platform software that has started (and made public) built inhouse and proprietary by any vendor. The largest inhouse projects that are happening now in enterprises, the NFV projects at the Telco’s, are all based on open source.
Holger's observation is certainly true for software at the platform or infrastructure level of the technology stack. All the examples that Holger cites, and nearly any other that he could cite, are in these categories.

But what about enterprise business applications, such as ERP or CRM? One of the best examples is SugarCRM, but even it lags far behind the market leaders. Open source ERP is in even worse shape. Players such as Compiere (now owned by Consona), Adempiere (a fork of Compiere), Opentaps (an ERP and CRM system), xTuple (formerly OpenMFG), and Odoo (formerly OpenERP) barely move the needle in terms of market share. Where is the Linux of ERP?

Since the early 2000s, I have been hoping that open source would catch on as an alternative to the major enterprise apps vendors, such as SAP, Oracle, Microsoft, Infor, and others. I would like to see open source as a counterweight to the major vendors, putting more market power on the side of buyers. 

So, why hasn't open source been more of a contender in enterprise applications?  I can think of three factors, for a start.
  1. Open source needs a large set of potential users. But enterprise applications do not have as broad a potential user base as infrastructure software. Although the ERP market is huge, when you break it down by specific industries, it is small compared to the market for, say, Linux.
  2. Enterprise apps require a large effort in marketing and sales. Buyers put great weight on name recognition. But open source projects do not generally show much interest in the sales and marketing side of a business. If a project is truly community-developed, who is interested in marketing it? As a result, very few people know what Odoo is, for example, let alone how to acquire it.
  3. Open source is labor-intensive. It is great for organizations that have time but no money. My impression is that open source ERP adoption is somewhat more successful in some developing countries, where there are very smart people with good technical skills willing to spend the time to implement a low-cost or no-cost solution. Here in the U.S., such companies are rare. Most would rather write a check. 
Ironically, open source is very popular among enterprise application providers themselves. Software vendors, whether cloud or on-premises providers, love open source and many now build nearly all of their systems on it, because it scales economically. Yet, when they sell their own enterprise applications, the last thing they want to do is offer them as open source.

So, why hasn't open source been more successful for enterprise applications? Perhaps readers can come up with other reasons. Please leave a comment on this post, or tweet me (@fscavo), or email me (my email is in the right hand column).

Update, Jan. 18: My friend Josh Greenbaum has posted a lengthy response on his blog, here: Open Source, Enterprise Software, and Free Lumber. Please read the whole thing, as it is quite thoughtful.

Josh agrees that open source software (OSS) has been more successful for infrastructure components than for enterprise applications. But he goes off in a different direction to argue that it's not right for commercial software vendors to make money from their use of OSS. I have two basic disagreements with Josh on this point. First, many OSS licenses (the GNU GPL, for example) mandate that software incorporating the OSS be provided under the same license. So commercial software providers go to great lengths to ensure that their developers do NOT incorporate OSS into their software products. Commercial providers CAN, however, use OSS in their own operations (e.g. use of Linux or MariaDB in their provision of cloud services), and they can include OSS as a supported platform. In both cases they are not violating the terms of the OSS license.

My second disagreement is that Josh objects to OSS on what I consider to be more or less moral grounds: that it is wrong for others to make money from the free contributions of developers (a "sucker's game," he calls it). Putting aside the fact that commercial software providers (think IBM, Microsoft, Facebook, Google, and hundreds of others) are by far the largest contributors to OSS, no one is holding a gun to the head of any individual developer forcing him or her to work for free. If OSS contributors find it acceptable for others to make free use of their labors, who am I to say that it is wrong? The fact that OSS has been wildly successful (at least for infrastructure-like components) tells us that there must be something in the economic model of open source that benefits both contributors and users of OSS.

Update, Jan 18: Some email correspondence from my friend Vinnie Mirchandani, led me to send this email reply, lightly edited here:
Yes, the large tech vendors, such as Google, Microsoft, and IBM, have benefited enormously from open source, but they also contribute enormously to open source projects, because it is in their best interest to do so. You know that IBM contributed key IP from its decades-old work in virtualization. Microsoft open sourced Visual Studio Code, and it is now one of the most widely adopted development environments. Oracle, IBM, and others contribute to Linux because it ensures that Linux runs on and is optimized for their hardware. They all contribute because it is in their self-interest to do so. Moreover, senior open source developers, especially those with commit privileges, are in high demand and are often hired by these same large tech companies. So the whole open source movement has become a virtuous ecosystem where everyone benefits.

Update, Jan 19: Over at Diginomica, Dennis Howlett riffs on our discussion in Why you should take notice of the open source in enterprise suckers conundrum. On my question of why open source has not been more successful in enterprise applications, he points to the lack of real marketing and sales efforts. He writes:
I’d go one step further and add a nuanced view of Frank’s (2) element. In most cases, enterprise software is sold, it’s not bought. What I mean is that troupes of vendor reps, marketers and other hangers on line up to convince you about taking on one or other solution. In the open source world you are ‘buying’ not being sold. There is no real money for marketing and sales. You either take it (for free) and then work on it yourself, or you enlist the help of specialists who both understand your processes and the software code itself. And despite the early success of Salesforce as a cloud vendor from whom you bought applications at the departmental level on your credit card, the majority of enterprise deals are sold.

Thursday, January 10, 2019

Friction in Cloud Services Contracting

If I can sign up for your cloud service without human interaction, why do you make me contact your billing department to cancel or downgrade my subscription? 

This question came to mind after an experience I had this week dealing with a well-known SaaS provider. The provider, who will remain unnamed, is well known in the market for "collaboration."

Although this provider serves many large companies, it is also a good choice for small work groups, which is how my consulting firm uses this service. There is a free trial version, which converts to a paid subscription for a nominal per-user fee. As such, you can sign up a handful of users for an entry-level service without interacting with a human being.

The experience started when our annual billing notice came in and I realized that we were paying for a level of service much greater than what we needed. So, I looked at the provider's website to see how to downgrade our level of service.

The instructions I found read, in effect, "Contact our billing department."

Now, to the provider's credit, the billing department handled my request in a fairly efficient manner, although of course they wanted to know why I wanted to downgrade, whether they had done anything wrong, whether I realized all the benefits I was receiving, and so on. I believe I needed to respond to two, perhaps three, emails to accomplish the downgrade.

So the question remains: Why require this extra step?  If I can sign up via self-service, why can't I downgrade or cancel via self-service?

Monday, January 07, 2019

Why Are Median Salaries Falling for Some IT Job Positions?

Over at Computer Economics, we've just released our new IT Salary Report for 2019, and there are some findings that are a bit counter-intuitive.

For example, in a time of low unemployment and a strong economy, you would think IT salaries would be rising strongly across the board. But that is not the case. Although salaries across all IT jobs are rising at the median, the national median salaries for some job positions are actually falling.

As we write in a Research Byte for the new report:
Another factor we are seeing is that the salaries of new hires are decreasing. This usually does not happen in a strong economy. However, many IT workers are migrating from high-cost-of-living cities to places such as Nevada, Idaho, Oregon, Colorado, North Carolina, and Florida, where they usually earn a lower salary but enjoy a much-lower cost of living. Many employers have also been moving their operations to these same low-cost areas. In terms of real dollars, salaries might not be increasing as quickly, but workers are still seeing benefits.

“National medians are useful for determining the general direction of the economy or hiring, however this year, more than ever, it is best to look at salaries at a regional level,” said Tom Dunlap, director of research for Computer Economics, based in Irvine, Calif. “As major cities vie to attract employers looking to take advantage of lower costs and new supplies of talent, salaries will be in flux.”  
In economic analysis, one of the mistakes people make is to fail to recognize the rational decisions that individuals and organizations make in response to incentives (or disincentives). There is no doubt that the cost of living--and the cost of doing business--in some parts of the U.S. have gotten ridiculous. Think of the San Francisco Bay Area, for example.

As housing prices and office space lease rates go through the roof, what is the logical choice? For both individuals and companies, it is to relocate to cheaper metropolitan areas--whether across the Bay, or out of state to Nevada or Texas.

The option of remote work has made this choice much easier. Over the past decade, an increasing percentage of the IT workforce has been working remotely, whether telecommuting a few days a week, or moving out-of-state altogether.

It used to be that relocation meant turnover. Today, not so much. For strong performers, employers are often willing to let them relocate and work remotely. The business keeps a strong performer, and the employee sees his or her salary go further. Or, organizations may relocate to a more cost-effective location and allow many of their employees to stay put.

Ultimately, salaries find their natural level, and that's what we think is going on right now for some IT positions.

The salary trends introduction to our IT salary report is available at no charge on our website. It also comes with a sample of our salary tables.