Thursday, December 05, 2013

Supply Chain Management in the Era of Social Business

Applications of social networking are easy to see in the business-to-consumer space, in functions such as sales, marketing, and customer service. But is there also a role for social tools in heavy back-office B2B processes? At first glance, the applications may not be apparent. But when the word “collaboration” is substituted for “social,” we can see that B2B organizations made use of these technologies long before the word “social” came into vogue. Think Lotus Notes, for example.

Nevertheless, the opportunities for social business are growing, and nowhere do I see a greater need than in supply chain management, specifically planning systems.

Most supply chain planning (SCP) systems today are not social. Rather, they are oriented around the job of an individual planner, who works with a user interface that strongly resembles an Excel spreadsheet. Rows show demand and supply, with columns indicating time periods, left to right, marching into the future. Highlighting is used to indicate periods where there are shortages of resources, whether material, capacity, or other elements of production. Exception messages alert the planner to take action. Except for better graphics, the user experience is not much different from that of MRP systems that I worked with and taught in the 1970s and 80s.
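The spreadsheet paradigm just described boils down to simple time-phased netting. As a rough illustration only (the function and figures below are my own, not from any vendor's system), here is how a planning row might be computed, with shortage periods flagged the way a planner would see them highlighted:

```python
# Hypothetical sketch of a spreadsheet-style planning row: columns are time
# periods, and any period where projected on-hand goes negative is a shortage
# that the planner would see highlighted.
def projected_on_hand(start_inventory, demand, supply):
    """Net demand against supply period by period, left to right."""
    on_hand, row = start_inventory, []
    for d, s in zip(demand, supply):
        on_hand = on_hand + s - d
        row.append(on_hand)
    return row

demand = [40, 60, 50, 80]   # units required in each period
supply = [50, 50, 50, 50]   # scheduled receipts in each period
row = projected_on_hand(20, demand, supply)
shortages = [period for period, qty in enumerate(row, start=1) if qty < 0]
print(row)        # projected on-hand by period
print(shortages)  # periods a planner would see highlighted
```

In this made-up example, the last period comes up short, which is exactly the kind of exception message that prompts the planner's "take action" step discussed below.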

What’s Wrong with Spreadsheets?

The spreadsheet paradigm has survived for decades because it does have its strengths. First, it is familiar to anyone trained in principles of supply chain management. Second, it allows a lot of information to be conveyed on a single page.

The issue comes in the “take action” part of the planner’s job, especially when an action affects other participants in the supply chain, such as customers, suppliers, or sub-contractors. For example, a planner may be trying to resolve an issue with a late order. Taking action in this case might mean paying premium freight to expedite a supplier order, rescheduling production, shorting another customer, scheduling overtime, or any number of exceptional actions. The problem is that such decisions can rarely be made by a single individual. They require collaboration and approval by various other players inside and outside the organization. At this point, the planner turns from the SCP system and picks up the telephone, sends an email, or convenes a meeting.

Traditional SCP systems are good for identifying the problem, and they are good for recording the decision. But they are not good as a platform for collaboration to discuss the problem and to make a decision. Supply chain collaboration is not simply a matter of “getting approval.” These are content-rich collaborations, often requiring analysis of what-if scenarios and tradeoffs between competing metrics and objectives.

In other words, today’s SCP systems are systems of transactions, not “systems of engagement” (to use the term coined by Geoffrey Moore).

What Does Social SCP Look Like?

I got a little insight into what the next generation of SCP systems might look like when I attended the Kinaxis user conference last month in Scottsdale, AZ.

By way of background, Kinaxis provides a supply chain planning system, dubbed RapidResponse. The company was founded in the early 1990s and has been through several name changes, most recently from Webplan to Kinaxis in 2005. Kinaxis was developing in-memory software long before in-memory became an industry buzzword. The firm also moved to a cloud delivery model in the late 1990s, around the same time that Salesforce.com and NetSuite were starting out. Kinaxis has been successful selling into large companies with complex supply chains, and it competes directly against SAP and Oracle, as well as other best-of-breed specialists that vie for this market.

During the half day of analyst briefings, Kinaxis executives put up some screen shots of a new user interface that the company is considering. Although they did not use the word “social” to describe their objectives, I immediately saw the embedded social aspects of the new user interface.
  • Automatic team selection. In a large organization, it is not always readily apparent who needs to be involved in a certain supply chain decision. Knowing who should be involved on the customer and supplier side can be even more difficult. The prototype role-based dashboard automatically tells the planner or other user who needs to be involved—inside and outside the organization—in deciding each proposed action.
  • Business intelligence in context. For each supply chain decision needed, the demo UI allows each participant to see the impact of the proposed action on the business and on other people. So, there’s no need to leave the application to look up relevant information. In this way, the system promotes cross-functional alignment and consensus.
  • System of engagement. The new UI does more than just record the transaction. It captures team voting, comments, and assumptions, which are traditionally done outside the formal system. 
  • Cross-device access. No more waiting until you get back to your desk. The new UI automatically reformats itself across desktop, tablet, and smart phone displays, allowing access anywhere, any time. Going beyond the Apple/Google operating systems that many vendors support, Kinaxis also supports Blackberry and Microsoft mobile platforms.
  • Light gamification. When the team arrives at a decision for a given case, the alternative scenarios fall off the display, like sticky notes falling from a whiteboard, and the word “Closed” is stamped on the case—a little visual reward for resolving the case. Though I didn’t see it in the demonstration, I can envision a leader board for each functional group, showing number of cases resolved and other metrics that the organization deems important.

Embedded Collaboration vs. a General Purpose Tool

To be fair, Kinaxis is not the first to seek application of social business principles to the supply chain. However, most attempts thus far have involved general purpose tools, such as Microsoft's SharePoint or Yammer, or Salesforce.com's Chatter, to capture collaboration among trading partners. There has also been talk about the use of social media sites such as Twitter to monitor or rapidly communicate events that may affect availability of material, for example. But these just scratch the surface of what is possible.

But using a general purpose social tool requires the planner to use one system for planning and another for collaboration, with little or no connection between them. So, when supply chain professionals are in the planning system, they can't collaborate, and when they are in the collaboration system, they can't plan.

In contrast, the social business capability being considered by Kinaxis is not some general purpose activity feed layered on top of the application. Rather, it is embedded in the application itself. The automated team selection solves a real problem in large complex supply chains. The discussion thread is natively embedded as part of the application and is focused on specific decisions to be made. There are no side discussions about pet cats or who’s bringing what to the company picnic. If those things are important, let them be relegated to Chatter or Yammer and keep the SCP discussion focused on taking supply chain actions.

The prototype coming out of the lab at Kinaxis gives a clear view of what is possible in putting social business constructs into supply chain planning. It helps that Kinaxis has built a complete SCP solution from top to bottom as a single system, as opposed to building it up from acquired components. With a single in-memory system, Kinaxis can more readily provide all the information at the same time to all participants. There is no cascading of plans sequentially from one level to another: all levels are planned concurrently.

Does this mean that SCP vendors need to give up the spreadsheet paradigm? Not at all. My advice would be for vendors to continue to use the spreadsheet user interface. As I noted, it does have its benefits. But the “social SCP” paradigm needs to be introduced alongside the spreadsheet. In this way, long-time SCP users can continue to work with the interface they have grown up with, and at the same time, be introduced to a different paradigm. User interface changes can be quite unnerving for long-time system users. A parallel approach will make the transition easier.

You can watch the full video of the prototype user interface by clicking the graphic below (free registration required).

Related Posts

Supply Chain Management Delivers Positive ROI Despite
Breakthrough in Material Planning: Demand Driven MRP

Tuesday, October 22, 2013

Open Source Not a Panacea for Cloud Infrastructure Decisions

When it comes to cloud computing, do open systems win out over proprietary standards? My view: perhaps in theory. But cloud computing--specifically public cloud infrastructure--has bigger problems right now than whether it's built on open source. Furthermore, open source cloud infrastructure providers have obstacles of their own to overcome.

I'm participating in an online video debate on October 29, hosted by IBM's Smarter Computing program, on "the pros and cons of open computing when it comes to cloud, big data, and software defined environments." This post outlines part of my viewpoint on this subject.

What's Not to Like about Open Source?

One of the problems in debating "open source" is that it is difficult to argue against the word "open" as a concept. For example, we all like to think of ourselves as open-minded, not closed-minded. We admire top executives who have an open-door policy--have you ever heard of a manager with a "closed-door policy"? In home-buying, sellers like to point out the open floor plan. Who ever advertised a house as having a "closed" floor plan?

So also, in computing, open just sounds better. Moreover, when it comes to cloud infrastructure, open source projects such as OpenStack and CloudStack have admirable goals, such as the ease of porting computing workloads from one cloud provider to another, promoting competition, and escaping the dreaded vendor lock-in.

The Larger Issue: Adoption

But, to me, it is premature to debate whether open source cloud infrastructure is better. The larger issue today is the small percentage of corporate IT organizations that embrace public cloud infrastructure at all. In our Technology Trends survey at Computer Economics last year, we found that less than 10% of IT organizations worldwide have any use or plans to use public cloud infrastructure. Moreover, of these, only half claim to use it, or intend to use it, for production systems.

If they are not using public cloud for production systems, then what are they using it for? Our survey found interest in public cloud for software development and testing, disaster recovery capabilities (such as backup and recovery), or for archiving older data.

In addition, I question some of those production uses of IaaS. Discussions with associates who advise data center managers confirm my suspicions. One associate, who works a lot in the entertainment industry, pointed out that one popular use of cloud infrastructure is in rendering animated films. In this case, animators require enormous amounts of computing power and storage to render even a few minutes of animation. As it turns out, cloud infrastructure is perfect for such a use, as it frees the IT organization from having to maintain those high levels of computing resources, which are only used sporadically. Furthermore, the risk is low. If the cloud provider goes down in the middle of a rendering job, the animator can simply resubmit the job. Nothing is lost.

But when it comes to production systems, such as accounting systems or royalty processing, these same entertainment industry decision-makers shun cloud infrastructure. It is not that they want to keep such systems on-premises, as witnessed by the fact that they have been outsourcing their data centers to managed services providers for years. As my associate remarked, "CIOs don't want to be in the data center business any more." But, rightly or wrongly, they are cautious about entrusting production systems to a cloud infrastructure.

Open Source Not a Panacea

Although the goals of OpenStack and other open source cloud projects are admirable, they may be a solution in search of a problem.
  • First, migrating workloads between competing cloud providers may not be as big a deal as open source proponents claim. Customer demands are already forcing competing cloud providers to recognize and support each other's APIs. For example, some members of the OpenStack community are urging support for Amazon's APIs. If OpenStack fully goes this route, application systems written for Amazon's cloud will be able to be deployed on an OpenStack cloud without a lot of migration effort. Even VMware--the vendor with the largest stake in so-called private clouds--supports Amazon APIs and is also a contributor to OpenStack. Therefore, as far as I can tell, portability is not a major issue.
  • Second, so far, it does not seem as if proprietary cloud providers are using their proprietary standards in order to extract higher fees from customers. Quite to the contrary, cloud infrastructure is a very competitive market. Whatever concerns IT decision makers have about public cloud infrastructure, one thing they cannot complain about is its cost. Leading cloud providers are not raising prices--rather, they are cutting prices, in some cases many times a year. IT decision makers are not holding on to their on-premises systems because they are concerned about the cost of public cloud--they are focused on risk. This was also a key finding in our Technology Trends survey.
If a cloud provider wants to overcome enterprise IT buyers' concerns, it should focus on reliability, security, and privacy, and offer a well-staffed support group. Many of the OpenStack providers are doing exactly that. It may well be that OpenStack providers, such as IBM, HP, Dell, Rackspace, and others, will be successful because of their value-added services, not because they embraced an open source infrastructure.

Incumbent Infrastructure Providers Have an Edge

Furthermore, proponents of open source cloud infrastructure may be underestimating the advantage that on-premises infrastructure providers have in moving their customers to the public cloud. Although, as discussed above, IT leaders have concerns about moving production workloads to the public cloud, one thing that does appeal to some of them is the ability to move seamlessly from on-premises system instances to cloud instances.

This is the so-called hybrid cloud infrastructure. CIOs may adopt a hybrid cloud strategy in order to move non-critical workloads out of the data center, freeing up system resources (e.g. the animation rendering application discussed above), or to "burst" to the cloud during periods of high demand for system resources (e.g. during a major advertising campaign that strains an in-house e-commerce system).

Now, which provider has the advantage in helping IT organizations set up hybrid cloud capabilities? The provider that is already serving the on-premises data center (Microsoft, VMware, or Oracle, for example), or the one that would like the data center to replatform its on-premises systems to match the provider's cloud infrastructure (e.g. OpenStack, CloudStack)?

The answer is obvious, which is why Microsoft, VMware, and Oracle are all providing public cloud services that require very little change to the customer's on-premises infrastructure. Unless an IT organization is building a data center from scratch, it is unlikely to want to standardize its internal infrastructure on a completely new technology--open source or otherwise.

Advocating for Cloud and Open Source

Nothing I've written here should be taken as an argument against cloud computing or open source. I've been blogging on these subjects since 2002 and consider myself as an advocate of both. In my view, one day nearly all systems will be delivered via cloud computing, and open source software has proven itself to be a viable business model for a variety of software categories, especially for lower levels in the technology stack. But in the case of public cloud infrastructure, I don't see open source cloud projects as dominating the market any time soon.

Update: The video of my IBM debate is now online. You can watch it by clicking the image below.

Related Posts

The Inexorable Dominance of Cloud Computing
Cutting Through the Fog of Cloud Computing Definitions

Photo Credit: Flickr, followtheseinstructions

Monday, September 23, 2013

Best Practices for SaaS Upgrades as Seen in Workday's Approach

If you're involved with enterprise software, you need to pay attention to what Workday is doing--even if you're not interested in HR or financial systems--because Workday is one of the best examples of how enterprise applications can and should be delivered in the cloud.

This was one point I took away from Workday's annual user conference in San Francisco and from a day-long series of briefings for industry analysts earlier this month. 

The differences between Workday's practices and the approach of traditional enterprise software vendors are striking. There are several points of contrast, but in this post I'd like to focus on how Workday delivers software upgrades and some new twists in how it does this.

Traditional Approach to Software Upgrades

In the traditional enterprise software model, vendors develop new versions and provide them to their customers that are under maintenance agreements. The customer takes delivery of the new version, installs it on a test copy of the system, migrates data from the existing production version, retrofits any customizations or interfaces with other systems, revises its user procedures, performs system testing,  and migrates all of its users to the new version. In the process, if there is any time left in the schedule, the customer also may investigate how it would like to use any new functionality offered in the new version.

The bottom line is that in the traditional model, software upgrades are both a technical and a business exercise. The technical challenges of data migration, retrofitting of customizations, and reworking system interfaces can be significant and can encourage customers to stay on older versions of a vendor's system for many years. When such a customer finally wants to get current on the latest version, the upgrade process can rival the time and expense of the original implementation. The technical aspect can be so much work that companies often retain outside service providers to manage or assist in the effort. The business aspects--accommodating changes to business processes or embracing new functionality--are often jettisoned for the sake of simply getting the new version installed from a technical perspective. As a result, customers often do not realize the benefits of the new functionality that the vendor offers.

The Workday Approach

Workday's approach to upgrades, from the beginning, is simple: it takes responsibility for all technical aspects of the software upgrade, allowing the customer to focus solely on the business aspects. There are at least three reasons that Workday can do this:
  • Workday's object model allows most customizations to be brought forward to new versions of the system with little or no retrofitting.
  • Likewise, Workday's Integration Cloud, based on technology it obtained through its acquisition of Cape Clear, allows most custom integrations to continue to work with new versions of its system.
  • Since Workday operates the system on behalf of the customer, Workday takes all responsibility for migrating the customer's data to the new version. 
The impact of this last point should not be underestimated. Last year, Workday's CTO, Stan Swete, wrote about how important it is for the SaaS provider to take full responsibility for migrating customer data to new versions: 
[The] Software-as-a-Service (SaaS) model improves service delivery quality by letting the provider own the end-to-end process of development, conversion, and deployment. In the on-premise software world the vendor controls development (and associated QA), but there is a hand off for conversion and deployment. At Workday, the update process is not done until every customer is on the new version. The same team that project manages our development also project manages conversion and deployment.
When it comes to version upgrades, not all SaaS providers are created equal. Some are little more than single-tenant hosting providers. Others are multi-tenant SaaS providers, but they deploy new versions as separate instances of the system and allow customers to stay on older versions for long periods of time. This makes version upgrades considerably more difficult if and when customers do decide to upgrade. Workday, as discussed, is at the other end of the spectrum, keeping all customers current on the latest version. Salesforce.com, NetSuite, and Plex are similar to Workday in this regard, though they may differ in the details of how they do it.

Further Improvements in Workday's Approach

This year, Workday has further refined its approach to version upgrades in four ways:
  1. Single production instance for all versions. Previously, Workday would deploy a new version of Workday as a system instance that was separate from the previous version, and Workday would migrate customers in waves from the old version to the new version over a three-week period. Workday's new approach is for the current version and the new version to exist simultaneously on the same system instance. Workday will now move customers to the new version by means of a set of "switches" that dictate which features of the system the customer will see. This new approach is possible because of Workday's object orientation discussed earlier.
  2. Continuous development and deployment of new functionality. Instead of holding all functionality enhancements for its periodic version upgrade, Workday is now introducing smaller changes on a weekly basis. This is especially important for small but high-priority changes or for tax and regulatory updates. Contrast this to the traditional vendors, who required many months or years between the time customers request changes and the time they actually see them in updated versions.
  3. Continuous conversion of customer data. As Workday develops new features that require changes to its data model, the single production instance now allows Workday to convert customer data in the background, in advance of actually migrating customers to the new version. This reduces the amount of downtime required when the customer is moved to the new version.
  4. Preview instance. Now that there is a single production instance and continuous conversion of customer data, Workday is able to offer customers a preview instance of the new version, giving customers a longer time-frame in which to evaluate and plan for the new version. Under the traditional model, customers only get a hands-on look at the new version when they take delivery of the upgrade, install it, and convert their data to it in a prototype environment. Workday's approach gives customers much more time and encourages them to make use of the new functionality.
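The "switches" mechanism described above resembles what developers today call feature flags: both versions live in one production instance, and per-customer flags determine which features each tenant sees. As a minimal, hypothetical sketch (the class and tenant names are my own invention, not Workday's actual design):

```python
# Hypothetical feature-flag sketch of per-tenant version "switches":
# one production instance serves all tenants, and flags decide which
# version of a feature each tenant sees.
class FeatureSwitches:
    def __init__(self):
        self._flags = {}  # (tenant, feature) -> bool

    def enable(self, tenant, feature):
        self._flags[(tenant, feature)] = True

    def is_enabled(self, tenant, feature):
        return self._flags.get((tenant, feature), False)

def render_report(switches, tenant):
    """Serve the new-version report only to tenants that have been switched over."""
    if switches.is_enabled(tenant, "new_report_ui"):
        return "v2 report"
    return "v1 report"

switches = FeatureSwitches()
switches.enable("acme", "new_report_ui")   # "acme" has been migrated
print(render_report(switches, "acme"))     # sees the new version
print(render_report(switches, "globex"))   # still sees the old version
```

The appeal of this pattern is that "migrating" a customer is just flipping flags, with no separate instance to stand up and no data to copy between systems.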
Swete summarized these changes in a blog post during the user conference:
Probably the best example of embracing continuous change is happening on the service delivery side of our business. Workday has moved to continuous deployment of new features to a single code line. This move, along with the continuous background conversion of data for new features, enables us to complete updates for our production customers with less scheduled downtime. Application of changes to a single code line reduces the expense of maintaining multiple code lines around each update we do. Moving to continuous deployment also gives us the flexibility to continue to respond to our customers’ requirements when it comes to the number of updates we do each year.
As Swete indicates, the single production instance, continuous development approach, and continuous conversion of customer data allow Workday to scale back from three major new versions a year to just two. The conference audience applauded when co-CEO Aneel Bhusri made this announcement, perhaps indicating that many companies have difficulty absorbing three major upgrades a year. At first blush, the reduction in the number of new versions a year would imply that Workday is slowing down the number of new features per year. But in sidebar conversations with Workday executives the next day, it became clear that these most recent improvements actually mean that Workday will be introducing more new features each year. The difference is that the smaller changes will be trickled in on a weekly basis, while major new features will be held for the twice-yearly updates. As indicated earlier, this approach also allows Workday to accommodate regulatory or tax-law changes on short notice, which have become more common in recent years.

Workday's core strategy of reducing or even eliminating the technical burden of version upgrades is a best practice for SaaS providers, allowing customers to focus exclusively on business improvement and maximizing the value of their system investment. More SaaS providers should follow this example.

Postscript: Over at Diginomica, Phil Wainwright has two good posts covering some of these same points:
Note: Workday covered my travel expenses for attending its user conference.

Update, March 19, 2014: This Workday post by David Clarke provides a detailed explanation of Workday's single codeline development process.

Related Posts

The Simplicity and Agility of Zero-Upgrades in Cloud ERP

Sunday, September 08, 2013

With SaaS, the Software is Not the Only Service Needed

Software-as-a-Service (SaaS) simplifies much of the complexity involved in implementing and using enterprise software. However, in consulting on several SaaS selection projects over the past two years, I've grown concerned that some SaaS providers may be neglecting some of the key elements of success for buyers.

(Please note that in this post I am not addressing the distinction between multi-tenant and single-tenant hosted systems. Although there are important differences, my concern about services transcends this distinction and applies to both.)

As the name implies, software-as-a-service (SaaS) turns software into a service. No longer does the buyer need to install software in its on-premises data centers. Nor does the buyer need to provide its own day-to-day internal support for maintaining and operating the application infrastructure. The entire system is delivered to users "as a service" by means of a network connection.

But is the software the only service that SaaS buyers require? Whether the system is SaaS or on-premises, it does not implement itself. What about implementation services, such as project team training, help with prototyping, data migration, end-user training, acceptance testing, and go-live support?

Moreover, once the company goes live on the new system, what about ongoing support? Is there a help desk to deal with problems, such as system unavailability or response time? What if a bug is uncovered or a patch needs to be applied? Who does the buyer turn to when there are questions about how the system operates? Is there good, up-to-date system documentation, along with training materials?

SaaS-Only Providers May Attempt Arms-Length Implementation Services

Over the years, I've noticed a distinct difference in the selling approach of what I call the SaaS-only providers versus traditional enterprise software vendors. The SaaS-only players, being 100% committed to the online model, attempt to move as much of the selling process online as possible. For low-end applications such as survey software or email marketing, they offer free trials with online conversion to the paid service. For mid-level or higher-end applications (think accounting systems or ERP), they offer self-directed online demonstrations and perhaps some sort of limited trial use of the system. If at all possible, to minimize the cost of sales, they attempt to close the deal on the web or over the phone, with as little on-site selling as possible. All of these sales methods are good, and I'd like to see the traditional vendors also move in this direction.

The problem in my mind, however, is when vendors attempt to move their implementation services to this low-touch model. They try to use online computer-based training, web-based instructor-led training, and phone support, with as little on-site or personalized service as possible. This may work for lower-end applications, but when you move into those mid-level or higher-end applications, the customer can often be short-changed. It puts more responsibility on the buyer to organize its own resources for deployment.

This may work for some small companies, but not all. Some simply need more hand-holding.

Now, where I think the SaaS-only providers generally do a good job is in post-implementation services. Because these vendors are entirely web-based, they generally have good capabilities for ongoing support, such as self-help systems, user support communities, and web-based training. They also have much experience in migrating customers to new versions, which is far less painful than the upgrade cycles of traditional on-premises vendors.

Traditional Vendors and Channel Partners May Not Be Good at Post-Go-Live Support

The traditional vendors--and their channel partners--face the opposite problem. Their sales model has always been a high-touch model. They conduct face-to-face sales meetings and demonstrations. They bring services people into the process to help close the deal. They derive substantial revenue from implementation services, so they invest in those resources.

What happens when these vendors offer a "cloud" or "hosted" version of their systems? This is where the traditional vendors and their channel partners risk falling down. They can sell their systems as they always have, but now, what about post-implementation ongoing support? The software developer often doesn't want to get involved in the day-to-day management of its customers' systems, so it pushes that responsibility to its VARs. The VARs, in turn, often cannot afford to invest in their own data centers, so they turn to data center hosting partners to operate the system. This arrangement can work, but the result can be a complex relationship:
  • The software vendor develops and issues new versions of the software.
  • The hosting provider operates the customer's system.
  • The VAR helps the customer implement the software and provides day-to-day ongoing support for the customer, such as help desk services and resolving any issues with the hosting provider. It also provides services when customers need periodic version upgrades.
The risk in this arrangement lies largely with the VARs. Their legacy is as implementation partners. Their experience is in projects. They come in, do a job, then leave. They do not have a culture of providing day-to-day support to their customers. Furthermore, these partners almost always have a mix of on-premises customers and hosted customers, with hosted customers forming a smaller or much smaller percentage of the business. They might have some help desk personnel to take calls, but they cannot dedicate technical resources just to the hosting customers. Rather, they must use their implementation consultants when a customer has a routine problem. If the consultant who knows that customer's business is deep in the middle of another customer's go-live, the first customer's problem may go unresolved.

    What Should SaaS Buyers Do? 

    Seeing that there can be problems with both the SaaS-only providers and the traditional providers offering hosted versions, how can buyers minimize their risks? I would suggest that more due diligence is needed beyond what software buyers perform for on-premises enterprise systems.

    When considering SaaS-only providers:
    • In the sales presentation: observe whether the SaaS provider is pushing an approach of mostly virtual services, claiming the system is so easy to implement that you won't need much help. If you are prepared to implement without much direct support, fine. Otherwise, you may be starved for resources when you most need them.
    • In your reference checking, ask about the implementation experience. Who provided implementation services, the SaaS provider directly, or an implementation consulting firm? What type of support did they provide? Were their on-site services adequate? What do you wish you had done differently?
    When considering traditional vendors with hosted offerings:
    • In the sales presentation: observe whether the vendor is mostly talking about the software and implementation services, or whether it gives sufficient time to ongoing support after the go-live. A focus on the former may indicate the vendor still thinks of itself primarily as a sales and implementation services provider, not as an ongoing support provider.
    • In your reference checking: ask about the day-to-day experience with ongoing support. Does the provider schedule a lot of downtime for maintenance? Is there much unscheduled downtime? Do you ever have problems getting the right person on the phone to resolve issues?
    Of course, all these questions can be asked of all vendors. But you might consider a different emphasis depending on whether the vendor is a SaaS-only provider, or a traditional vendor with both on-premises and hosted offerings.

    Finally, if it's not in the contract, it doesn't exist. Be sure all of your needs are reflected in the actual contract and associated statements of work. If you're not experienced with negotiating these, seek help. 

    Related Posts

    IT Services in a SaaS World

    Thursday, July 11, 2013

    Microsoft Reorg: What Does It Mean for Dynamics?

    [Photo: Dr. Qi Lu, head of Microsoft's Applications and Services Engineering Group]
    CEO Steve Ballmer published a long-awaited memo this morning announcing corporate-wide organizational changes at Microsoft. Although the reorg includes changes across many Microsoft functions, what does it mean specifically for the Dynamics group, which is responsible for Microsoft's business applications?

    The changes for Dynamics appear minor, but there is much written between the lines.

    Ballmer wrote:
    Dynamics. Kirill Tatarinov will continue to run Dynamics as is, but his product leaders will dotted line report to Qi Lu, his marketing leader will dotted line report to Tami Reller and his sales leader will dotted line report to the COO group.
    There are two important implications in this short paragraph.
    1. Strategic role of Dynamics. The dotted line relationships with sales and marketing are a recognition of the connections that Dynamics makes outside the customer's IT organization. In the enterprise, apart from Dynamics, Microsoft sells at a fairly low level--at best to the CIO. The Dynamics group is the one part of Microsoft that gets into conversations with other members of the C-suite and with line-of-business leaders. As the consumerization of IT continues, it is essential that Microsoft break out of the IT organization. With its enterprise applications, Dynamics represents an excellent opportunity for it to do so.
    2. Dynamics representing Microsoft's ISV partners. The dotted line relationship with Qi Lu, the newly announced head of the Applications and Services Engineering Group, points to the opportunity to leverage other parts of Microsoft's portfolio in its Dynamics line of business applications. These include products such as Bing, Lync, Office 365, SharePoint, Exchange, and Yammer, among others. All of these products are enterprise-focused and should be tightly integrated with the Dynamics applications. If Microsoft expects its ISV partners to make use of these technologies, Microsoft needs to set an example by doing so within its own Dynamics apps. The tighter relationship between Dynamics and Qi Lu's business unit indicates the strategic role that Dynamics plays as a showcase for the use of the broader portfolio of Microsoft products.
    Finally, some have asked: do these dotted line relationships indicate a lack of confidence in the Dynamics group? The answer is no. If there were a lack of confidence, a corporate reorganization would be the perfect time to replace the leadership. Clearly, that didn't happen. These changes, rather, point to an elevated role for Dynamics within Microsoft.

    Related posts

    Microsoft Dynamics Move Up-Market: What’s Missing?
    Four Needs Pushing Microsoft Dynamics into Large Enterprises

    Tuesday, June 25, 2013

    Oracle and The Great Detente

 and Oracle today announced a "new strategic partnership." For their mutual customers, the announcement represents a welcome thawing of relations between the two companies. But it remains to be seen whether it represents a strategic change of direction for

    Not a Radical Departure for

    The press release is quite short, just five paragraphs, outlining five points of partnership:
    • SFDC will standardize on Oracle Linux.
    • SFDC will deploy Oracle's Exadata engineered systems in its data centers. 
    • SFDC will deploy the Oracle Database and Java Middleware Platform as part of its cloud infrastructure.
    • Oracle will integrate's cloud apps with Oracle's Fusion HCM and Financial Cloud.
    • will also implement Oracle's Fusion HCM and Financial cloud apps for its own internal use.
    So, what exactly in this announcement represents a fundamental change in direction for Salesforce?
    • SFDC's infrastructure is already based on Linux, so standardizing on Oracle Linux is a minor change.
    • SFDC's applications already make use of Oracle's database as the lower-level physical data store.
    • The press release provides no detail on how SFDC will make use of Oracle's Exadata boxes. If they are merely used to replace commodity storage devices, there would not be any change to the basic architectural design of SFDC's infrastructure.
    • Oracle's integration of Fusion HCM and financial system with SFDC is merely an application integration initiative. 
    • SFDC's implementation of Oracle Fusion HCM and financial applications is a routine "win" announcement. 
    The second bullet could potentially be the most radical departure for SFDC. Oracle's new database release, 12c, could provide the capability for SFDC to run multiple pluggable databases (one for each customer) within a single container database. This would represent a fundamental shift for SFDC away from its single multi-tenant database architecture in favor of Oracle's pluggable database approach.
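    To make the architectural contrast concrete, here is a minimal sketch in Python, using SQLite purely as a stand-in. The table and tenant names are hypothetical, and this is not SFDC's or Oracle's actual implementation. In a shared multi-tenant database, isolation depends on every query filtering by a tenant identifier; in a database-per-tenant design, loosely analogous to Oracle's pluggable databases, the database boundary itself provides the isolation.

```python
import sqlite3

# Shared multi-tenant store: one schema, every row tagged with a tenant id.
shared = sqlite3.connect(":memory:")
shared.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT)")
shared.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("acme", "Alice"), ("acme", "Bob"), ("globex", "Carol")])

def shared_query(tenant):
    # Isolation is enforced in application code: every query must scope
    # by tenant_id; a missed filter would leak another tenant's data.
    rows = shared.execute(
        "SELECT name FROM accounts WHERE tenant_id = ?", (tenant,)).fetchall()
    return [r[0] for r in rows]

# Database-per-tenant: each tenant gets its own database, loosely analogous
# to one pluggable database per customer inside a container database.
tenant_dbs = {}

def plug_in(tenant):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (name TEXT)")  # no tenant_id column
    tenant_dbs[tenant] = db
    return db

plug_in("acme").executemany("INSERT INTO accounts VALUES (?)",
                            [("Alice",), ("Bob",)])
plug_in("globex").execute("INSERT INTO accounts VALUES ('Carol')")

def per_tenant_query(tenant):
    # Isolation is enforced by the database boundary itself.
    rows = tenant_dbs[tenant].execute("SELECT name FROM accounts").fetchall()
    return [r[0] for r in rows]

print(shared_query("acme"))        # ['Alice', 'Bob']
print(per_tenant_query("globex"))  # ['Carol']
```

    Migrating from the first design to the second would touch nearly every layer of a SaaS platform, which is one reason to doubt that such a shift would happen quietly.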

    Nevertheless, the fact that there is no mention of 12c or pluggable databases in the press release makes me seriously doubt that SFDC intends to fundamentally change its platform architecture. I have a question pending with SFDC on this point and will update this post if and when more information becomes available. [Update: SFDC is not willing to provide details beyond what was in the original announcement and subsequent conference call with Ellison and Benioff.]

    Thawing of Relations

    What I do find significant in this announcement is that Oracle and have apparently buried the hatchet, at least for now. For their mutual customers, now and in the future, this is good news.

    Customers are not well-served by vendors sniping at each other, and the verbal tiffs between Benioff and Ellison over the past few years, frankly, have become annoying. Hundreds of customers have interfaced Oracle Applications with's cloud apps. But until now, they have done so without the explicit support of Oracle. Customers will be pleased if the two companies can cooperate in providing standard integration. Hopefully, both parties will start acting like adults and doing what is in their joint customers' best interest.

    Workday Is Odd Man Out

    If there is a competitive target in this announcement, it has to be Workday. SFDC will implement Oracle's HCM and will integrate its Sales Cloud with Oracle's HCM and also with its Fusion Financials product. This puts Workday in an awkward spot, in that Workday leverages for its platform-as-a-service capabilities. It will be interesting to see how Workday reacts to this détente between Oracle and

    While the use of Oracle Fusion within SFDC doesn’t mean much to SFDC customers, it does give bragging rights to Larry Ellison against Workday. Interestingly, NetSuite's CEO Zach Nelson was recently taking pot-shots on stage at Workday during NetSuite's Suiteworld conference. At the time, I took it as a sign of Workday's competition with NetSuite in financial applications. Now I see it as part of a wider competitive alignment. Both Zach Nelson and Marc Benioff are Oracle alumni and both have close ties to Larry Ellison. The three now seem to be joining in solidarity against Workday and validating that Workday is a threat to all three.

    Regardless of the competitive posturing by these major enterprise technology providers, the Oracle/Salesforce detente is welcome news for customers.

    Update, 11:30 a.m. PDT. Dennis Howlett spoke with Aneel Bhusri, co-CEO of Workday, who says that he doesn't anticipate any impact from the Oracle/SFDC announcement.

    Update, 12:15 p.m. replied to my inquiry, indicating they are unable to provide additional details on the announcement at this time.

    Related Posts

    Oracle Fusion Runs Into Oracle Apps Unlimited
    Oracle's Behavior Undercuts Its Own Cloud Accomplishments

    Wednesday, June 05, 2013

    Plex Software and Its Mandate for Growth

    As the vendor of the first cloud-only manufacturing ERP system, Plex Systems has a wide footprint of functionality, going beyond what is offered by newer cloud vendors.

    Nevertheless, after more than a decade of development, Plex has fewer than 1000 customers and its presence is limited mostly to smaller manufacturing companies in a few sub-sectors.

    As evidence, there were about 700 attendees at last year's PowerPlex conference. This year's PowerPlex, which I attended this week in Columbus, Ohio, saw about 750 Plex users in attendance. Granted, these are highly satisfied and enthusiastic customers. There just need to be more of them.

    On the one hand, Plex claims a compound annual growth rate of nearly 30% over the past three years--an impressive number. But as the first fully multi-tenant manufacturing cloud vendor, Plex could have, and should have, been growing at a faster pace. Now, there are several other cloud vendors taking aim at Plex's market, such as NetSuite, Acumatica, Rootstock, and Kenandy.

    Plex must grow more aggressively, for two reasons. First, the company was acquired last year by two private equity firms. Private equity is not known for patience. Second, as CEO Jason Blessing pointed out in his keynote, growth protects the investments of existing Plex customers. Software companies that do not grow do not have the resources for continued innovation. Eventually, they only provide enough support to keep current customers--at best. They become, in effect, "zombie vendors," to use Blessing's term.

    So, what does Plex need to do to grow at a more substantial pace in the coming years? I see six mandates. Some of these are fully embraced by Plex, while others, in my view, could use more emphasis.

    1. Get Noticed

    If some cloud vendors need to tone down their marketing hype, Plex needs to kick it up a notch. Plex was not only the first truly multi-tenant cloud manufacturing system, it was also one of the first cloud providers, period. Yet the majority of manufacturing systems buyers have not heard of Plex. Reflecting Plex's home turf in Michigan, discussions with Plex insiders about this often include the phrase "midwestern values"--in other words, not blowing one's own horn. However admirable this humility may be on a personal basis, it is not useful from a business perspective.

    Hopefully, this is about to change with the hiring of Heidi Melin as Chief Marketing Officer. Melin worked with CEO Blessing at Taleo, and more recently she was CMO at Eloqua, which was acquired by Oracle. In my one-on-one interview, Blessing was high on Melin's arrival, and indicated that she would be especially focused on digital marketing to reach the many thousands of companies in Plex's target market.

    2. Put More Feet on the Street

    Blessing also indicated that he intends to beef up Plex's sales efforts, which to date have been concentrated largely in the Great Lakes region. This has left many sales opportunities poorly supported in other US geographies, such as the southern states (home to many automotive suppliers), Southern California (home to many aerospace suppliers), and other parts of the country that are home to many food and beverage companies. Increased sales presence in international markets is also needed.

    This is a step long overdue. When my firm Strativa short lists Plex in ERP selection deals, Plex is often flying in resources from across the country, which does not sit well with most prospects. Opening regional sales offices, like Plex has now done in Southern California, will help put more feet on the streets of prospects.

    3. Move Up-Market

    Historically, Plex's system architecture is oriented toward single-plant operations. There is some logic to this approach. As Jim Shepherd, VP of Strategy, points out, most of the information needed by a user is local to the plant he or she is working in. However, even small manufacturers often have needs that include multiple plants, cross-plant dependencies, and central shared services. Plex does have some multi-billion dollar customers, but these are primarily companies with collections of plants that are relatively independent of one another.

    In response, Plex is building out its cross-site and multi-site capabilities while keeping its primary orientation around the single plant. In my view, this will be a key requirement in Plex moving up-market and serving larger organizations.

    4. Build Out the International Footprint

    The bulk of Plex's sales are to US companies, but if Plex is to grow more aggressively it will need to better support the international operations of these companies. It will also need to sell directly to companies outside of the US.

    In his keynote, Shepherd pointed to the new ability for Plex to print reports on A4-size paper, commonly used in parts of the world outside North America. The fact that Plex is just now getting around to formatting reports on A4-size paper shows just how US-centric Plex has been. To be fair, Plex does support multiple currencies and supports some international tax requirements, such as in Brazil, India, and China, although some of this is done through partners. Nevertheless, Plex has much it could do to improve its appeal to multinational businesses. In this day and age, even small companies--like those Plex targets today--have international operations. Building out its international footprint is another prerequisite for Plex to achieve more rapid growth.

    5. Venture Outside of Traditional Subsectors

    Plex sees its current customer base as primarily three manufacturing sub-sectors: motor vehicle suppliers, aerospace and defense, and food and beverage. Blessing indicates that by Plex's calculations, these three sub-sectors account for about 25-30% of the manufacturing ERP market. Surprisingly, however, Plex currently has no plans to expand outside of them. Blessing believes that simply by improving its sales execution in its current markets, Plex can continue its compound annual growth rate of nearly 30% for the next several years.

    Count me skeptical. First, as indicated above, Plex no longer has exclusive claim to the cloud manufacturing ERP market. Plex is going to have to fight a lot harder than it has in the past for new customers. Second, why is 30% growth the benchmark? I understand that there are risks in more aggressive growth. But aiming higher might be needed in order to meet the 30% goal.

    In my view, Plex is not far off from being able to address the needs of manufacturers that are adjacent to its existing markets. These would include industrial electronics, medical devices, and industrial equipment. Plex already has some customers in these sub-sectors, so it's not like the company is starting from scratch. Hopefully Plex will formally target these industries, sooner rather than later.

    6. Target the Customers You Want, Not Just Those You Have

    Over the past 10+ years, Plex has let customer requests drive its product roadmap. In fact, much of Plex's development has been funded directly by customers or groups of customers who desired certain new features. This worked well to minimize Plex's up-front costs of new development and also led to high levels of customer satisfaction. However, it had one major drawback: if development is purely customer-driven, everything you build will by definition be of interest only to the type of customers you have today. In addition, a single customer or group of customers is not able to fund major new developments that are more strategic in nature.

    Here Plex is on the right track. Recognizing this need, Plex is now allocating product development funds for strategic initiatives, including a revamp of its user interface, cross-browser access, and business intelligence and reporting capabilities (Inteliplex), among other major initiatives. In conversations at PowerPlex, customers described these as welcome developments, although the initiatives have apparently diverted Plex resources from some of the customer-requested enhancements they also wanted.

    The Way Forward

    There's plenty that I admire about Plex: its zero-upgrades approach, its broad functionality, and the fact that it proves manufacturing companies have been ready for cloud computing for many years, contrary to the claims of on-premises ERP providers. Most of all, Plex allows me to roam around its user conference and speak informally with customers. Nearly without exception, everything I hear is positive. Not a single customer has told me they made the wrong choice with Plex, although with any ERP implementation there are always bumps in the road.

    But none of this guarantees that Plex will thrive in the future. Like proverbial sharks, software vendors must continue to move forward, lest they die. The management team at Plex has some new blood, including the CEO, and a new perspective. They understand the opportunities ahead, but will they fully rise to the challenges? We'll be watching.

    Note: Plex Software covered some of my travel expenses to their annual user conference. 

    Related Posts

    The Simplicity and Agility of Zero-Upgrades in Cloud ERP 
    Plex Online: Pure SaaS for Manufacturing

    Sunday, June 02, 2013

    Moving Outside the Box of Enterprise IT

    Information technology goes far beyond the realm of enterprise IT.  New technologies, such as big data, mobile applications, and cloud computing hold promise in addressing many of the world's great problems, while at the same time offering strategic advantage for businesses. Corporate IT leaders, therefore, need to reach outside their narrow focus on ongoing support to incorporate these new technologies to deliver business value. 

    This was my main takeaway from the Future in Review 2013 (FiRe2013) conference down the road last month in Laguna Beach, CA. FiRe bills itself as "the leading global conference on the intersection of technology and the economy." It is an annual conference of the Strategic News Service, which publishes research under this broad theme.  

    Beyond Enterprise IT

    Although FiRe is focused on technology, it is largely outside the boundaries of what is typically considered "enterprise IT," or even "consumer IT." It even goes beyond "line of business IT." It is about future-oriented issues involving the impact of technology on economic and societal interests. Under this year's theme, Digitizing the Planet, the agenda covered a wide range of focus channels, including computing and communications, economics and finance, education, energy, healthcare, environment, global initiatives, and pure science. Presenters included big names, such as Vint Cerf, the "father of the Internet," who is now Chief Evangelist at Google, as well as a host of visionary thinkers from a variety of disciplines in the private and public sectors.

    For me, it was a chance to get outside my usual track of user and vendor conferences in the enterprise software market. It was also a great opportunity during the breaks to speak one-on-one with professionals outside of my usual circle, for example, David Engle, Superintendent of the Port Townsend public school district and a panelist in the education channel, Nick Vitalari, author of the book, The Elastic Enterprise, and Greg Ness, who moderated a panel on hybrid cloud.

    Here are some of the big ideas that caught my attention and what they mean for enterprise IT.  
    1. Move from Data Analysis to Data Visualization. One eye-opener was the session on data visualization with Chris Johnson, University of Utah, and Bob Bishop, Founder of the International Centre for Earth Simulation (ICES) Foundation. The aim of ICES is to integrate all the sciences that pertain to planet Earth. The panelists showed one such visualization: a huge simulation of earth's thermohaline conveyor belt: a single worldwide ocean current that has a large impact on Earth's climate. Another showed the earth's magnetosphere.

      How does this apply to enterprise IT? Organizations are swimming in data, both internal and externally sourced data, both structured and unstructured. To go from analyzing the data, to discovery of useful information, to decision support requires some sort of visualization. If data analysis is on your IT strategic roadmap, data visualization should be there also.

    2. Social Collaboration around Data.  There was more on the big data theme. Stanford and NASA engineers have come together to form Intelesense Technologies, with its website. The site provides an interactive 3-D globe, dubbed InteleView, with over two million layers of geospatial data (which users can supplement with their own data) along with forums, blogs, shared calendars, video conferencing, and other tools to facilitate group collaboration worldwide around data. To provide a hands-on experience, Intelesense gave trial system access to all FiRe attendees. 

      How does this apply to enterprise IT? It's not enough for just one person to visualize large data sets. We also need tools that promote collaboration around data. Collaborators may include individuals within and outside the enterprise, and they often include participants worldwide. Many so-called "social business" tools today only provide the mechanism for collaboration (e.g. threaded discussion) but do not include the content (i.e. data) for collaboration. The real need is to combine big data with social collaboration. The Intelesense site is an excellent case study in what this looks like.
    3. Business Opportunities and Threats in Big Data. John Hagel and Eric Openshaw from Deloitte posed the question: will massive increases in data lead to increased fragmentation of industries, or to consolidation of businesses in the hands of the few who can support these massive data platforms? Their answer: it depends on the industry and the business function. Fragmentation will occur mostly in product innovation and commercialization businesses, such as digital media, and even in physical products that can be disrupted by 3D printing. On the other hand, consolidation may take place among infrastructure providers, such as digital platform providers. With big oil, the question was always: who owns the resource? But with big data, the question is: who can create value from it?

      How does this apply to enterprise IT?
      In the view of Hagel and Openshaw, most large companies are vulnerable, because they are largely focused on their products, the part of their business that is threatened by fragmentation. CIOs need to look beyond systems that support their organizations' current business, toward capabilities and business models that allow their organizations to compete in the era of big data platforms. It may not even be your data, but if you can create value from it, your organization will succeed in the marketplace.
    4. Protecting IP More Important Now than Ever. Although so much of FiRe was visionary, there was a significant focus on security, with four tracks on "Achieving Zero Loss of Crown-Jewel Intellectual Property." Vint Cerf, now Chief Evangelist at Google, used his time to talk about network security. Cerf and other presenters offered a number of potential solutions. Some are technical, such as increased use of two-factor authentication and software security measures integrated with hardware at the chip level. Others go beyond technology, such as the use of economic sanctions and import tariffs against companies that are found to have stolen intellectual property.

      What does this mean for enterprise IT?
        As the world becomes increasingly connected and much of the organization's IP is digitized, the opportunities and rewards for IP theft increase. As CIOs facilitate new technology-enabled business models, they must also increase their focus on security.
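      Two-factor authentication is one of the few items above that reduces to a compact, fully specified algorithm. As an illustration, here is a minimal sketch of RFC 6238 time-based one-time passwords (the scheme behind most authenticator apps), not any particular vendor's product:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (SHA-1 variant)."""
    counter = unix_time // step              # 30-second time window
    msg = struct.pack(">Q", counter)         # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

      A server and an authenticator device that share the secret compute the same code independently, so possession of the device becomes the second factor.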
    5. Simplification of IT Environments Key to Big Data Challenges. The conference was not without an enterprise IT focus. Mark Hurd, Oracle's co-President and a regular speaker at FiRe, was on hand for a wide-ranging conversation. He pointed out that twice as much data will be created worldwide this year than has been created in the entire history of the planet. Much of this is machine- or sensor-generated data, such as data coming from sensors positioned on deep sea drilling rigs. Drilling companies collect all of this data--much of which is uninteresting--so that they have access to that one piece of information that turns out to be critical when there is a failure deep beneath the sea floor. Storing, managing, and analyzing that much data is a challenge, and technologies such as virtualization and data compression are key to success. Yet many businesses are shackled by legacy systems and infrastructure that do not scale to meet the demand. Simplification of the IT environment, including use of public and private clouds, is essential to meet these challenges.

      What does this mean for enterprise IT? CIOs have two responsibilities that are somewhat in conflict. They must maintain current systems while investing for the future. With limited IT budgets, IT organizations must simplify and optimize their existing systems and infrastructure so that they have the bandwidth to make these strategic investments.

    A Challenge to Enterprise IT Vendors

    The expanding role of technology is not only a challenge for enterprise IT leaders, it is also a challenge for IT vendors. Nearly every major enterprise IT vendor has its visionary initiatives. SAP has HANA, Oracle has its Exa-boxes, IBM has Watson and its Smarter Planet initiatives, and so forth. At the same time, these vendors have enormous revenues in legacy technologies: SAP in its Business Suite, Oracle in its collection of acquired software and hardware technologies, IBM in its legacy hardware and systems integration business lines, and so forth. If IT organizations are challenged to rise above their legacy system support requirements, so too are IT product and services providers. Can the major IT vendors meet the challenge, or will a new generation of big data and cloud providers take their place?

    One note on the conference format itself. In contrast to most technology conferences, which feature highly scripted keynotes and breakout sessions with single speakers, the format at FiRe is nearly all panel discussions or one-on-one interviews. This format promotes a much more conversational and spontaneous style. The moderators or interviewers take a minimalist approach, guiding the discussion where needed but not becoming a center of attention themselves. Mark Anderson, the FiRe conference chair, and Ed Butler from the BBC hosted a number of sessions in this style. Other conferences could learn from FiRe's format.

    The registration page for the FiRe 2014 conference, May 20-23, 2014 in Laguna Beach, CA, is now open.

    Monday, May 20, 2013

    NetSuite Manufacturing Moves on Down the Highway

    NetSuite held its annual user conference, Suiteworld, last week, and in his day one keynote, CEO Zach Nelson highlighted "NetSuite for Manufacturing."

    I wrote about NetSuite's manufacturing functionality last year in my post, NetSuite Manufacturing: Right Direction, Long Road Ahead. Returning to this subject one year later, it is encouraging to see the progress that NetSuite has made. At the same time, there will be twists and turns that NetSuite will face in continuing down this highway.

    If NetSuite is going to continue its growth, reported at 28% last year in its core business, it really has no choice but to pursue manufacturing customers. Manufacturers are the largest market for ERP systems and therefore an attractive target for NetSuite's development efforts. Although manufacturers have been slower to embrace cloud computing than many other sectors have, the situation is rapidly changing. In our ERP vendor selection services at Strativa, we find manufacturing companies increasingly open to cloud ERP. Sometimes, in fact, they only want to look at cloud solutions. In other words, NetSuite is at the right place at the right time.

    Balancing New Functionality with Need for Simplicity

    To more fully address the needs of manufacturing, NetSuite continues to build out its core functionality, with basic must-have features such as available-to-promise (ATP) calculations, routings, production orders, and standard costing. In some of the breakout sessions, there were indications that NetSuite is also exploring functionality that goes well beyond the basics: for example, supply chain management (SCM) and demand-driven MRP (DDMRP).
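    Of the basics just mentioned, ATP is the most mechanical: in the textbook discrete method, the uncommitted quantity in the first period, and in each period with a scheduled receipt, is that period's supply minus the customer orders booked before the next receipt arrives. A minimal sketch follows, with illustrative numbers only; real systems layer on lot sizing, backward consumption, and allocation rules.

```python
def atp(on_hand, receipts, orders):
    """Discrete available-to-promise per period.

    receipts: master-schedule / scheduled-receipt quantity per period
    orders:   committed customer orders per period
    """
    n = len(receipts)
    result = [0] * n
    i = 0
    while i < n:
        # Supply in this bucket: on-hand inventory counts only in period 1.
        supply = (on_hand if i == 0 else 0) + receipts[i]
        # Find the next period with a scheduled receipt.
        j = i + 1
        while j < n and receipts[j] == 0:
            j += 1
        # ATP covers committed orders up to (not including) that receipt.
        result[i] = supply - sum(orders[i:j])
        i = j
    return result

# Six periods: 20 units on hand, receipts of 40 in periods 2 and 5.
print(atp(20, [0, 40, 0, 0, 40, 0], [10, 15, 10, 5, 15, 10]))
# [10, 10, 0, 0, 15, 0]
```

    The point of the sketch is that the basic feature is simple; the engineering effort lies in the surrounding rules, which is exactly where over-engineering becomes a risk.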

    This leads to the first twist that NetSuite will need to navigate: filling gaps in manufacturing functionality while not over-engineering the system. Oracle and SAP are famous for having manufacturing systems that are feature-rich, requiring significant time and effort from new customers to decide which features to configure and to implement them. Part of the attraction of NetSuite is its relative simplicity and ease of implementation. If NetSuite wants to remain an attractive option for the likes of small and midsize manufacturers, or small divisions of large companies, it will be wise to pick and choose where to build out the sophistication of the product.

    For example, the availability of multi-books accounting (which I discuss briefly in the video at the top of this post) is a good move, as it has widespread applicability to both small and large companies in the manufacturing industries as well as other sectors. But does DDMRP fall into the same category? Moreover, how much SCM functionality do prospects expect from NetSuite, and where does it make sense to partner with best-of-breed specialists, who can better bridge a variety of SCM data sources?

    NetSuite's recent success with manufacturers such as Qualcomm, Memjet (discussed later in this post), and others gives it real-world customers to validate its product roadmap. It will do well to prioritize new development in the areas those customers deem most needed. NetSuite may ultimately choose to move fully up-market, to become the cloud manufacturing equivalent of SAP or Oracle. But if it does so, there are already a number of other cloud ERP providers, such as Plex, Rootstock, Kenandy, Acumatica, and Keyed-In Solutions, ready to take NetSuite's place serving small and midsize manufacturers.

    NetSuite's PLM/PDM Strategy Needs Openness

    NetSuite also announced a new alliance with Autodesk to integrate its PLM 360 offering for product lifecycle management with NetSuite's ERP. This is in addition to NetSuite's existing partnership with Arena Solutions.

    By way of background, PLM systems manage the entire life-cycle of product development, from ideation and requirements gathering, through design and development, to release to manufacturing, service, engineering change, and retirement. PLM systems take an engineering view of the product and are generally under the domain of the client's product engineering function. PLM systems generally include product data management (PDM) systems as a subset, to manage all of the product data, such as drawings, specifications, and documentation, which form the definitions of the company's products.

    Over the past 20+ years, the integration of PLM and PDM systems with ERP has been a difficult subject. In organizations where engineering and manufacturing work well together, basic roles and responsibilities can be defined and proper integration of data can be accomplished. In organizations where such cross-functional processes are weak, PLM/PDM and ERP often form separate silos.

    Autodesk's PLM 360 shows very well, and the story about its cloud deployment matches well with NetSuite. However, it is my observation that the majority of manufacturers would do well simply to establish simple integration between their engineering bills of material (within their PLM/PDM systems) and their manufacturing bills of material (within their ERP systems). Making engineering documentation within the PLM/PDM system available to manufacturing ERP users is also highly desirable. Furthermore, few engineering organizations have not already standardized on a PLM/PDM system (e.g., PTC's Windchill, SolidWorks, and others), and they will seldom be willing to migrate to Autodesk just because the company is implementing NetSuite's ERP.

    This is another turn of the highway that NetSuite must navigate: will it offer standard integration to a variety of PLM/PDM systems, or will its answer to engineering integration be, "Go with Autodesk or Arena?" I do not believe that an Autodesk- or Arena-preferred strategy is the best.

    Case Studies Encouraging

    To validate its progress in the manufacturing sector, NetSuite reported on several case studies.
    • At the large end of the spectrum there was Qualcomm, the $19 billion manufacturer of semiconductors and other communications products. Although Qualcomm has Oracle E-Business Suite running throughout much of its operations worldwide, in 2011 CIO Norm Fjeldheim chose NetSuite for use in smaller divisions, based on the need for implementation speed and agility. As part of that strategy, Qualcomm has now gone live with NetSuite in a newly launched division in Mexico. This is a nice "existence proof" for a two-tier ERP strategy in a very large company.
    • At the smaller end of the spectrum there was Memjet, a manufacturer of high-speed color printer engines. Martin Hambalek, the IT director at Memjet, did a short on-stage interview during Nelson's day one keynote. Although the company has just 350 employees, it has engineering and manufacturing operations in five countries. Unlike Qualcomm, Memjet runs NetSuite as its only ERP system worldwide, showing NetSuite's capabilities for multinational businesses. Notably, Memjet is also a customer of Autodesk for its PLM 360 system, mentioned earlier. In my one-on-one interview with Hambalek later during the conference, I learned that he is the only full-time IT employee at Memjet: evidence that a full or largely cloud-based IT infrastructure requires far fewer IT resources to maintain.
    Customer stories are the best way to communicate success, and these two NetSuite customers substantiate NetSuite's progress.

    Rethinking the Services and Support Strategy

    As much as ERP functionality is important to manufacturers, there is another element of success that is even more important: the quality of a vendor's services and support. It struck me during the keynotes that, apart from an announcement of Capgemini as a new partner, there were no announcements about NetSuite's professional services. 

    More ERP implementations fail due to problems with implementation services than because of gaps in functionality. Functional gaps can be identified during the selection process, but problems with a vendor's implementation services are more difficult to discern before the deal is signed. Furthermore, functional gaps can often be remedied through procedural workarounds. But once the implementation is underway, failures in implementation services are difficult to remedy. Sometimes, such failures wind up in litigation.

    In this regard, NetSuite's rapid growth has a downside: it stretches and strains the ability of NetSuite's professional services group to spend adequate time and attention on its customers' implementation success. In advising prospective ERP buyers, I have much more concern about what their implementation experience will be than I do about any potential gaps in NetSuite functionality.

    One solution is to build a strong partner channel of VARs, resellers, and implementation service providers to complement or even take over responsibility for post-sales service and support.

    During the analyst press conference, I asked Zach Nelson about this point. NetSuite is building its partner channel, but how does it decide what work should go to its implementation partners and what should be retained for NetSuite's own professional services group? Nelson's answer reflected a traditional view: whoever brings the sales lead to NetSuite should get the services. In other words, if a lead comes through NetSuite's own sales team, NetSuite should get the services work. If the lead comes through a partner, the partner should get the services.

    As an advisor to prospective buyers, my own view is that NetSuite should rethink this strategy. The party that happens to find the prospect may not be the best party to deliver the services. In fact, NetSuite may be better served by passing off implementation services to local partners that are willing to spend more time with the customer on-site than NetSuite's own professional services group may be able to provide.

    At the end of his answer, Nelson indicated that he would actually prefer that NetSuite not be in the professional services business. If so, this is good news. Let NetSuite focus on developing and delivering cloud ERP, and let a well-developed partner channel compete to provide hands-on implementation services. Whatever professional services NetSuite does provide would then be better focused on supporting those partners.

    Just before leaving the conference, I gave Dennis Howlett my initial thoughts in this video interview on NetSuite Manufacturing and multi-book accounting.

    Disclosure: NetSuite paid my travel expenses to attend its user conference. They also gave me a swag bag.

    Update: in an email exchange, Roman Bukary, NetSuite's head of manufacturing and distribution industries, comments on NetSuite's PLM strategy:
    The fact that today we have a partnership with Arena and Autodesk is not a matter of “just” these two, it’s a matter of those vendors who have a smart, complementary cloud strategy and our own bandwidth to recruit and enable partners. For my $.02, we have an open strategy with the goal to change the kind of solution modern manufacturing can leverage today

    Related Posts

    NetSuite Manufacturing: Right Direction, Long Road Ahead.

    Thursday, March 28, 2013

    Does SaaS Save Money?

    According to a soon-to-be-published survey by Computer Economics, IT decision-makers appreciate the benefits of SaaS, such as speed, agility, and scalability. But there is one benefit that they do not rate highly. They do not see that SaaS saves money.

    Please read to the end of this post to see how I plan to quantify this issue with some hard data.

    One Client’s Impression

    This finding was reinforced in my mind last week, when I reconnected with a past client of my consulting firm, Strativa. We had helped this high-tech manufacturer three years ago with a new CRM system selection, and the company had chosen Salesforce.com. Now the CIO wanted to pick my brain about options for upgrading other parts of his applications portfolio.

    In the course of the conversation, the CIO made an interesting observation. “Frank, we think we made the right choice with Salesforce, and we believe cloud systems are the way to go. But I’ve got to tell you, they aren’t cheap.”

    I indicated that I had been thinking about this subject recently and asked him to tell me more. “Well, when you count the per-user fees, plus the platform costs, plus the partner apps that you want to implement, it can add up to a lot of money year after year,” he explained. “And, of course, you still have the up-front implementation consulting fees.”

    Changing the Subject

    Later in the day, I posted a couple of tweets about this conversation, and the reaction from some of my followers was interesting. “The real benefits of SaaS are in flexibility and agility,” replied one follower. “You shouldn’t be looking at TCO,” replied another.

    I'm always amused when analysts and consultants want to tell customers what questions they should be asking or not asking. As if some questions are off limits.

    Now, as a proponent of cloud computing, I’ll put myself right up there with the best of them. However, I would like to know: how does the total cost of SaaS compare to on-premises systems? Moreover, if SaaS is more expensive, isn’t that useful information for IT decision-makers? Of course it is. If a customer is going to make a technology decision, the customer should have all the information needed. Certainly, cost is one of the factors he or she should be taking into account.

    Four Theories

    So, let's consider: why might a customer think that SaaS doesn't save them money? Off the top of my head, there are at least four possibilities.
    • Theory 1: SaaS does save money, but customers don’t realize it. In other words, perhaps customers do not fully appreciate the cost of staffing and supporting on-premises systems, such as the cost of implementing future upgrades. These are costs that are eliminated or greatly reduced with SaaS. But since customers do not fully recognize those costs, they do not count those savings. Or, because of the cost, they might be avoiding upgrades of on-premises systems and not recognizing the price their organization is paying by not staying current.
    • Theory 2: SaaS does save money, but you only realize those savings when you completely eliminate your on-premises systems. If you still have most of your systems on-premises, moving just one of them to the cloud doesn’t eliminate your data center or data center staffing. So, you are not able to realize the cost savings from eliminating the data center. 
    • Theory 3: SaaS does save money, but vendors don’t pass along those savings to customers. In other words, SaaS applications are cheaper for vendors to develop, deploy, and maintain, but SaaS providers are simply matching the prices of on-premises vendors and enjoying extra profits.
    • Theory 4: SaaS is more expensive than on-premises systems, but it’s worth it. Perhaps SaaS does not save money, but the value of SaaS in terms of flexibility, agility, and scalability are so overwhelming that it’s worth it to customers to pay extra.
    Now, these four theories are not mutually exclusive. For example, SaaS may save money (Theory 1) and also allow vendors to appropriate some of the cost savings as extra profit (Theory 3). Or, a mix of on-premises and cloud systems does not save money (Theory 2), but it’s still worth it to customers in terms of agility (Theory 4). Furthermore, the answer may be different for different SaaS applications. For example, perhaps cloud CRM saves money, but cloud ERP doesn't.
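    To see why several of these theories can look plausible at once, it helps to put toy numbers on the comparison. The sketch below (all figures are hypothetical illustrations, not data from the survey) compares cumulative SaaS subscription fees against an on-premises license plus annual maintenance and internal support costs:

```python
def cumulative_costs(years, saas_annual, license_fee, maint_rate, support_annual):
    """Cumulative cost curves (in $K) for SaaS vs. on-premises.

    saas_annual    -- yearly subscription (per-user fees, platform, partner apps)
    license_fee    -- one-time on-premises license, paid up front
    maint_rate     -- annual maintenance as a fraction of the license fee
    support_annual -- yearly internal staffing/infrastructure for on-premises
    """
    saas = [saas_annual * y for y in range(1, years + 1)]
    onprem = [license_fee + (license_fee * maint_rate + support_annual) * y
              for y in range(1, years + 1)]
    return saas, onprem

# Hypothetical figures: a $200K/yr subscription vs. a $500K license
# with 20% annual maintenance and $50K/yr of internal support.
saas, onprem = cumulative_costs(10, 200, 500, 0.20, 50)
```

    With these made-up numbers, SaaS is cheaper every year until the curves meet in year ten. That is one reason reasonable people reach opposite conclusions: the answer depends on the planning horizon and on how honestly the ongoing on-premises support costs are counted.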

    More Data Needed

    But, the general question is still unanswered. Generally, from the customer’s perspective, does SaaS save money?

    To answer this question, Computer Economics has launched another survey. As part of our annual IT spending and staffing survey, we are looking for organizations that have moved most or all of their applications portfolio to the cloud. In other words, we are looking for customers that have no internally supported data center, or at least, a minimal set of on-premises systems. We are asking these customers to take part in our regular annual survey, and we will compare the IT spending ratios of these select customers against our standard industry ratios for IT spending and staffing. We will also interview these customers to learn more about their experience with SaaS and the perceived value as well as challenges.

    Through this study, we hope to be able to answer three main questions. First, do companies that have gone largely to cloud computing spend less on IT than those that have not? Second, how does the mix of IT spending differ? Finally, where do customers see the business value of SaaS?

    We already have a handful of respondents and the initial data is quite interesting. But we need more. If you are a company that has implemented all or most of your business applications in the cloud, please apply to take our survey. As an incentive, survey participants will receive $2,500 of free research reports.

    Apply for the Computer Economics Survey >>


    Wednesday, March 27, 2013

    Microsoft Dynamics Moves Up-Market: What’s Missing?

    In December 2012, I wrote about four market forces that are pushing Microsoft Dynamics onto large enterprise turf. I also outlined several case studies in which Microsoft was having success with large multinational organizations. Now, more recently, I attended the Microsoft Dynamics annual user conference, Convergence, and had an opportunity to interview Microsoft executives and customers to see what further progress Microsoft was making in its move up-market.

    Bottom line: Microsoft has many of the necessary elements in place to continue its move into large enterprises, but it still needs to fill several major functional gaps in its product offerings.

    Continued Evidence of Success

    In recent years, Microsoft has had several implementations of its Dynamics AX and Dynamics CRM systems in large enterprises. These include Carrefour S.A., the world's second largest retailer, Nissan Motor Company, Shell Retail, and others.

    Now, at its Convergence conference, Microsoft highlighted two more large company success stories:
    • Dell Computer is the world's third largest PC manufacturer as well as a leading provider of a variety of IT products and services, with revenues of $57 billion. Dell is in the process of consolidating its manufacturing ERP systems onto Microsoft Dynamics AX, with Oracle E-Business Suite continuing to run in headquarters and for certain corporate shared services.
    • Revlon is the well-known cosmetics company with worldwide revenues of nearly $1.5 billion. Revlon consolidated 21 ERP systems to a single instance of Microsoft Dynamics AX.
    Another key success factor for the large enterprise market is the ability to provide direct support. In this regard, Microsoft's reliance on its partner channel is often not sufficient for large companies. To address this need, Microsoft has been building up its Microsoft Services unit, which provides consulting and premier support not only for its Dynamics business applications but also for Microsoft's entire portfolio of offerings. The Microsoft Consulting Services (MCS) Dynamics unit has reportedly doubled its headcount over the past year, and it can provide everything from high-level program and partner management services to hardware support in conjunction with its large OEM partners, such as IBM and HP. For large customers, Microsoft can even take responsibility for service levels of the deployed applications.

    Three Major Functional Gaps

    These case studies, along with Microsoft's direct services capabilities, indicate that Microsoft has had some success in the large enterprise market. But are these exceptions, or are Microsoft's offerings mature enough to routinely take business away from the Tier I ERP and CRM players?

    The answer is, not yet. Microsoft as an organization has the global presence and the resources to do so, but the Dynamics business applications at present lack functionality in three critical areas. Until these are filled, Microsoft will be limited in the number of deals where it can be short listed against Oracle and SAP.
    1. Human Capital Management (HCM). Microsoft Dynamics AX today does have some HCM functionality for core HR, talent management, benefits administration, and employee/manager self-service. In addition, it does provide payroll for the US and Russia. However, those who have studied this functionality do not view Microsoft's HCM offerings as competitive with SAP, Oracle, Workday, or other first-tier HCM providers. In the SMB market, Microsoft could get away with these deficiencies, as many prospects either do not include HCM in their acquisition plans or are satisfied to work with a Dynamics partner for any gaps in functionality. In the large enterprise space, however, this is often not an acceptable strategy. This is especially true when the Microsoft partners for HCM are only regional players.
    2. Customer Service. The Dynamics team prides itself on the success of its Dynamics CRM offering, built from scratch to be a serious competitor to Salesforce.com, SAP, and Oracle. However, Dynamics CRM is not a full CRM offering. Its functionality is limited largely to sales force automation and now marketing automation (thanks to the 2012 acquisition of Marketing Pilot). Dynamics CRM lacks a full set of functionality for customer service and field service. So, when prospects are looking for a solution that gives them a 360-degree view of the customer—both new customers and existing customers, for both sales and for after-sales services—they quickly scratch Microsoft from their short lists. If they really want to go with Microsoft, they look to Microsoft partners to provide the needed functionality. Again, this approach may work for Microsoft's traditional SMB market—although even there, the lack of a customer service module is still a limitation. But in large global enterprise deals with thousands of users, most prospects take a quick look at Microsoft and move on to more robust providers.
    3. Supply Chain Management (SCM). Microsoft Dynamics AX today only offers traditional material planning functionality, so-called MRP and MRP-II systems. There are no supply chain execution modules for warehouse management, transportation management, or logistics. Neither is there supply chain planning functionality for demand forecasting, sales and operations planning, constraint-based scheduling, supply chain optimization, or event management. Again, in the SMB market, many prospects are doing well if they can implement basic MRP, and those who need more are often happy to consider partner solutions. But in the large enterprise space, prospects often expect this functionality to be part of the core offering.
    Partner solutions work best when they address narrow industry needs—for example, law firm practice management from LexisNexis, or complex manufacturing functionality from Cincom. But for broad horizontal systems, such as HCM, customer service, and supply chain management, prospects expect the ERP or CRM system to be able to provide that functionality directly. Partner solutions at this point are simply a band-aid.

    The good news is that Microsoft recognizes these deficiencies and intends to deal with them over the course of the next few years, although, for the most part, it is not giving out details publicly. The one area where Microsoft has indicated specific plans is in the supply chain area. Later this year, it intends to announce new capabilities for Dynamics AX for warehouse and transportation management, along with demand management. This is a good start. In the other two areas—HRMS and customer service—Microsoft executives only indicate that they realize these needs and intend to address them in future releases of Dynamics AX and Dynamics CRM.

    Priorities, Priorities

    The large company case studies illustrate that Microsoft Dynamics has an expanding presence in the large enterprise market. Nevertheless, it would be unusual to see Dynamics fully replace Oracle or SAP for customers in this space. That said, Microsoft can still be successful in the large enterprise space if prospects confine SAP and Oracle to a restricted role: pushed back into a corral to serve only their core financials and perhaps core HRMS needs. Outside of this corral, Microsoft Dynamics can then become the operational system platform for such organizations.

    If this is the case, the lack of HR functionality does not need to be an immediate impediment for further Microsoft progress up-market. Barring some major acquisition by Microsoft, it is unlikely that Microsoft Dynamics will have the richness of HCM functionality needed to displace SAP or Oracle in the HCM space. Any future Microsoft development in HCM will be more appealing to midsize organizations than to the large enterprise market.

    Likewise, Microsoft’s lack of supply chain functionality does not need to be a major impediment. Manufacturing, distribution, and retail prospects will still need to fill their SCM requirements with a third-party solution. Fortunately, there are good offerings from Microsoft partners for warehouse management and transportation management. Furthermore, even many SAP and Oracle customers look to best-of-breed solutions, such as E2Open and Kinaxis, for supply chain planning systems. So, the lack of Microsoft SCM offerings does not need to be a show stopper.

    The weakness of Microsoft’s customer service and field service features in the CRM product, however, is more problematic. When looking at CRM, most large enterprises want more than sales force automation. Microsoft’s acquisition of Marketing Pilot for marketing automation fills one gap. A similar acquisition or internal development of after-sales service functionality is probably the most urgent need if Microsoft is to further succeed in the large enterprise market.

    A Fiercer Battle

    What could go wrong with Microsoft's up-market ambitions? First, SAP and Oracle are not going to let themselves be passively corralled within corporate headquarters. Both vendors have major programs to further develop and serve line of business system requirements: SAP with its acquisitions of SuccessFactors, Ariba, and its line of business cloud applications; Oracle with its Fusion Applications.

    Second, there are other providers that have the same up-market ambitions as Microsoft. For example, Infor, which is headed up by former Oracle co-President Charles Phillips, fully intends to be a credible alternative to SAP and Oracle, and it already has a much broader footprint of applications than Microsoft has. Likewise, Workday from the very beginning took aim at the large enterprise market for HCM, financials, and operations management for services firms, and it is already a major thorn in the side of SAP and Oracle.

    Microsoft’s success in the large enterprise space, therefore, is not guaranteed. But its success so far is encouraging, and if it continues to fill out its functional footprint, it will become a strong contender.

    Postscript: Other analysts have good reporting on the Convergence conference. Esteban Kolsky has a good post on Microsoft Dynamics CRM, as well as a good video interview with Dennis Howlett.

    Update, April 4: I edited the paragraph on HCM, under the heading for "Three Major Functional Gaps." The original paragraph stated that Microsoft has no offering for HCM, which was not accurate. 

    Related Posts

    Four Needs Pushing Microsoft Dynamics into Large Enterprises