Friday, October 2, 2009

Microsoft’s Underlying Platform Parts for Enterprise Applications: Somewhat Explained

What About Visualization and User Interface (UI) Technologies?

However, what has somewhat intrigued me is Microsoft’s not-so-vocal touting and promoting of Windows Presentation Foundation (WPF), even though it is an intrinsic part of the .NET Framework. In fact, to the best of my knowledge, the tool has not yet been used in earnest within the Dynamics set, although Lawson Software and Verticent are the two independent software vendors (ISVs) that I am aware of deploying it.

Both vendors tout WPF’s rich UIs that support virtually infinite customizations and business process compositions using Microsoft applications. Other Microsoft-centric ISVs either support only a limited number of specific and prescriptive business scenarios, or use a combination of technology products (for example, Microsoft Office Business Applications (OBAs), Visual Studio.NET, and proprietary interfaces and UI tools) to come up with similar custom scenarios. Again, Microsoft currently uses WPF very selectively in Dynamics UIs, for example, in the Dynamics AX graphical view of the organization structure of the business.

With its Smart Office offering, Lawson is not the first to leverage Microsoft Office to deliver not only manager and employee self-service, but much more as well. In fact, I can think of the joint SAP and Microsoft Duet product, Epicor Productivity Pyramid, QAD .NET UI, SYSPRO Office Integration (SOI), IFS Business Analytics, and so on.

However, by leveraging WPF, Lawson embeds manager and employee self-service functionality more directly into Microsoft Outlook than Duet (which is more of an add-on launched from Outlook as an integrated pane) and most other vendors’ OBA solutions. For more details on Lawson Smart Office, see my earlier blog post on the vendor’s CUE 2008 conference and the Gartner Dataquest Insight report by Bob Anderson entitled “Lawson Raises the Bar With Differentiating ERP User Interface.”

Curiously, Lawson has deployed another non-mainstream Microsoft technology, Microsoft Office Groove. It is a peer-to-peer (P2P) collaboration platform, providing an outstanding base for collaboration (document exchange) scenarios that involve teams with sometimes disconnected participants. Microsoft claims that future product releases will improve the alignment for collaboration between Groove and SharePoint.

Lawson’s technology decision was likely owing to Groove’s concept of “shared workspaces” and Lawson’s view that individuals live in a “space” where they do most of their work. For example, a manager really “lives in” Microsoft Outlook, and should be able to do all his/her work from there. An accountant lives in Microsoft Excel and should be able to work from there. A mobile technician lives in the cell phone/personal digital assistant (PDA) metaphor, where a UI similar to that of the Apple iPhone or Palm Treo can come in handy.

Some Other Vendors’ UI Approaches

Still, although WPF provides a visually appealing, familiar and intuitive UI, it comes with some trade-offs, specifically in memory utilization (being hardware intensive), the need to be hooked to the network, and a much greater dependency on Microsoft software. For instance, IFS doesn’t use WPF today for IFS Applications’ UI simply because of hardware needs: running WPF requires quite a hefty PC in terms of memory, and preferably the (possibly still unstable) Windows Vista platform.

We are talking here about IFS’ upcoming next-generation UI, which had for some time been called Aurora, but is now called IFS Enterprise Explorer (IEE). Namely, to prevent any confusion about Aurora being a separate product from IFS Applications, IFS has recently clarified its naming conventions.

Aurora is now a development project that will yield several enhancements to IFS Applications, all with a focus on ease-of-use and user productivity. The first deliverable as part of the Aurora project is IEE, the new graphical user interface (GUI) for IFS Applications. It is important to note that after IEE is released, the Aurora project will continue, yielding future enhancements.

In any case, IEE is interesting, to say the least, for leveraging Microsoft UI technology to create the look (albeit not yet the multi-touch screen, hand gestures, etc. feel) of the Apple iPhone (on top of an Oracle database and Java-based application servers on the back end: some mix of technologies from adversaries, indeed). It is becoming quite obvious that the iPod and iPhone generation is our future workforce, who require well-designed tools that they “love” to interact with. At the same time, they accept no excuses to “Why can’t I…?” questions, such as, for instance, “Why can’t I search in the enterprise application the same way that I search on Google?”

At the end of the day, the design goal is to achieve more with fewer staff members, who thus have broader responsibilities, are able to handle the unexpected, collaborate with colleagues, and be more productive. In other words, the market drivers are the new and engaging design and user productivity. Consumer information technology (IT) and the web are leading the way, and are also becoming quite important for business applications.

To that end, prior to the IEE undertaking, IFS developed a pervasive enterprise search engine that attempts to think the way people think (e.g., “I need that fault report about the fire alarm not working”), and not the way enterprise systems think (i.e., “I want to go into the preventive maintenance module where, in the service request folder, I will start the fault report screen, in which I shall then make a query on the description field containing any words followed by the words ‘fire alarm’ followed by any other words again”).
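As a toy illustration of that difference (my own Python sketch, not anything from IFS, and with invented field names), a pervasive search simply matches free text anywhere in the record, with no knowledge of modules, folders, or screens:

```python
# Hypothetical fault-report records; the data and field names are invented.
fault_reports = [
    {"id": 1, "description": "Fire alarm not working in building C"},
    {"id": 2, "description": "Elevator door sensor fault"},
    {"id": 3, "description": "Fire alarm battery replaced"},
]

def search(term):
    """Return the IDs of all records whose description mentions the term."""
    term = term.lower()
    return [r["id"] for r in fault_reports if term in r["description"].lower()]

print(search("fire alarm"))  # → [1, 3]
```

The user types the phrase they remember; the structured query against the description field is built (or simply brute-forced, as here) behind the scenes.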

With built-in security (users can be limited in search authorizations as required), the enterprise search capability vouches for better results and value without additional costs. For more information, see TEC’s article entitled “Why Enterprise Application Search Is Crucial to Your ERP System.”

Show Me, Don’t Sell Me

Clearly, the easiest way for a vendor to allay my fears is to marry their feature list to my to-do list, and show me how to use their software for some of the things that I do every day.

And because I’m an inveterate YouTube addict (as are many of us office folk, I think), the best way to show me is to make some videos. Short ones that get right to the point so I can watch them while I’m eating lunch and still make my 1:00 meeting.

Looking around for videos such as these can be frustrating. Many vendor websites don’t have any. Some sites have them, but force you to register if you want to watch them (as if I needed any more email). Still other sites bury their videos so far down that you’re ten clicks away from finding out that they even exist. Ugh.

Hope is not lost

On the other hand, there are some vendors who, at least partly, get it, and the two that stood out in my quick survey were Microsoft and SAP.

The Microsoft Dynamics site’s introductory series of videos presents the Dynamics product line from the point of view of five “typical” department heads and their cartoon staff. It’s an overt marketing piece but if you hang on through the first minute of each manager’s spiel, you do get a few nuggets of valuable information.

For example, you can see how different Dynamics products integrate with other Microsoft Office products in the context of actual tasks, like tracking orders, generating reports, setting up marketing campaigns, etc. More to the point, you can see the products’ interfaces, which goes a long way towards forming your gut feeling about each product.

Dig deeper into the Dynamics site and you’ll find demo videos for each of the products. These tend to be a strange mixture of PowerPoint-ish presentations and actual walkthroughs. But again, if you can hang on through the benefit statements (and you’re not put off by images of trains, factory floors, and guys in suits shaking hands and sharing laptops), you’ll get genuine task-oriented information that will give you some idea of what it’s like to work with the products.

SAP has a similarly extensive video library for its Business One product. While the videos aren’t quite as easy to find as Microsoft’s are, I thought they did a better job of connecting the dots. For each demo, SAP lists a few capabilities. When you click through to the video, you’ll notice that it explains each of those capabilities in terms of day-to-day tasks.

You still have to put up with “typical user” personas, dull stock photography, and a few marketing-y bullets, but the SAP videos are pretty well focused on the end user. Which is nice. It’s one thing to know that a piece of software “manages customer interactions, from contact data and history to calendaring and tasks.” It’s another thing to see how a real customer service call might be handled using that software. Especially if you work in customer service.

It’s not that hard

“Of course,” you’re saying, “Microsoft and SAP have the money to do that sort of thing.” But the truth is, it’s not that hard. Just search YouTube for any popular software—Photoshop, for example—and you’ll find a wealth of video tutorials produced by lone users in their spare time. Any corporate marketing department worth its budget should be able to do at least that much. And while Microsoft and SAP might have the resources to polish their videos to within an inch of their lives, for the average end user, good content trumps good presentation any day.

And it’s worth it

Making it easy to find nuts & bolts information about day-to-day software use has important benefits for both buyers and vendors of enterprise software.

If you’re in the market for new enterprise software, and you’re following TEC’s sage advice, one of the things you’ll do in the early stages of your selection project is to ask end users exactly what they do with your current software, and what might help them do it better.

Don’t worry, you don’t have to ask them individually. Instead, make sure that your selection team includes a few people who know, or can find the answers to those questions, and help you turn early-stage user feedback into criteria that you can weigh and analyze relative to all of your other requirements.

When they pass their feedback up the chain to the selection team, users who have seen various solutions in action can point to concrete examples of functionality they’d like to have in the new system. Instead of having to fully explain a complete workflow or a missing feature, they can say “I want something that works like that.” The end-user advocates on the selection team can translate “that” into more quantifiable feature lists and requirements.

As the beleaguered user, what’s in it for me is the feeling that my voice has been heard, which means I’ll be more likely to adopt the new system and less likely to turn into the office jerk.

What vendor wouldn’t want that kind of bottom-up support from their potential customers?

Now I know that end users rarely make the final decision, and that software selection projects tend to be fraught with political considerations that can pull a company in one direction or another. But smart companies—the ones that carefully consider the everyday needs of their employees before making a rational selection—are going to get more bang for their enterprise software buck.

Smart vendors are going to do everything they can to get those employees on their side.

Meridian Systems’ “Catch Up” Challenge in the Capital Infrastructure Industry

Meridian, which promotes its business as the Plan-Build-Operate (PBO) technology solutions leader for Project-Based Organizations (another PBO acronym, and thus the “PBO squared” mantra), offers an end-to-end solution for building owners, construction and engineering firms, and public agencies in three flavors. These offerings respectively cater to high-end (Tier One), mid-market, and small market organizations that manage capital building programs and facility assets.

Meridian’s overall focus is to improve customers’ revenue and profit growth by optimizing facilities, and by reducing construction and facility costs. To that end, Proliance, a full-fledged infrastructure lifecycle management (ILM) suite on a native Web services-based platform, is aimed at Tier One high-end PBOs with over US$1 billion in revenues and over 500 full-time employees. In this market segment, where the competition comes largely from SAP and Oracle, and with deals valued from $750,000 to $10 million, Meridian typically wins owing to its OBA strategy, ready BIM and enterprise resource planning (ERP) integrations, Web services-based platform, distinct PBO product breadth, and well-attuned business intelligence (BI) tools for PBOs.

Different Strokes for Different Folks

This part will focus more on Meridian’s forerunner Prolog product for smaller organizations, and on the vendor’s upcoming fourth quarter of 2008 (Q408) release of Prolog Connect for the mid-market.

Prolog was originally introduced in 1993 on a client/server platform, and is in use today by more than 4,000 companies that have revenues from $10 million to $500 million, and from 10 to 100 employees. With typical contract values of less than $150,000, the product grew rapidly across small organizations in the architecture, engineering & construction (A/E/C) sector because of Meridian’s micro-vertical expertise and rich understanding of this space, a usable and intuitive user interface (UI), and easy customization by business users (versus information technology [IT] staff).

Prolog is best suited for the “Build” phase of Meridian’s PBO solution set, and includes more than 400 packaged reports. It manages a wide breadth of activities including purchasing/bid management, budgets and cost management, contract and change management, correspondence management, design collaboration management, daily journal entries, jobsite tracking, and safety and quality programs. Usual-suspect competitors are Primavera and Autodesk Constructware (and occasionally e-Builder).

To modernize Prolog, and also to appeal to larger mid-market companies, Meridian is releasing in late 2008 a new mid-market product, Prolog Connect, which provides a Web services and service-oriented architecture (SOA) layer atop Prolog’s project portfolio management (PPM)-oriented product set. Featuring an OBA strategy, secure collaboration with internal users and external supply chains, and flexible integration, Prolog Connect is targeted at companies in the $500 million to $1 billion revenue range or with between 100 and 500 full-time employees (FTEs). When sold together, Prolog and Prolog Connect’s typical contract price is expected to be up to $750,000.

Current State of Affairs at Meridian

Lately, Meridian has continued to win with its PBO value proposition for ILM, with deals across a broad segment of public and private organizations. Key recent deals were:

* In the federal government sector – The United States General Services Administration (US GSA), two contracts valued at $2.5 million and $10 million respectively, beating or replacing Skire and Primavera;
* In the energy sector – Ontario Power Generation (OPG), contract valued at $2.2 million, beating or replacing Primavera;
* In the transportation sector – The Illinois Tollway, contract valued at $2.2 million, beating or replacing CapitalSoft;
* In the A/E/C sector – Ryan Companies and DMJM H&N/AECOM, contracts valued at $2 million and $432,000 respectively, beating or replacing Oracle and Meridian’s own Prolog product; and
* In the real estate sector – CB Richard Ellis (CBRE), contract valued at $3.9 million, beating or replacing Bricsnet.

Other notable deals for Meridian include the State of Connecticut, Los Angeles World Airports, and the City of Seattle. Also of interest is that the company uses primarily a direct sales and support model for its upper-range Proliance product, and sells largely indirectly through system integrators (SIs) and value added resellers (VARs) in the small and mid-markets.

Meridian does not want to be in the ERP game; rather, it wants to “connect in.” Within the Prolog and Prolog Connect solutions, the vendor has pre-built hard connections into major project-based ERP leaders, including Deltek Systems. Proliance was built on Web services and in Extensible Markup Language (XML) to allow for multiple points of integration with other applications, including ERP, financials/accounting, document management, etc. Proliance includes its own asset management modules, but can also be integrated with other (more powerful) enterprise asset management (EAM) systems as required.

It is also interesting to note that from the beginning both ProjectTalk (the on-demand version of Prolog) and Proliance OnDemand were multi-tenant offerings (i.e., keeping many customers in one environment rather than dedicating one environment to each customer). Meridian determined early on that this was a much more economical way to achieve the economies of scale needed to reach profitability with its offerings. As for customers, there are many using both systems: Haskell, Hathaway Dinwiddie, and many others are on ProjectTalk, while ISTHA and CBRE use Proliance OnDemand.

Market Opportunity (and Challenges)

While Meridian is based out of the US, it works with a wide partner network, including customers that are turning into VARs, and partners that are looking to sell to emerging markets. A new Morgan Stanley report entitled “Emerging Markets Infrastructure: Just Getting Started,” published in April 2008, identifies that a sizable boom in infrastructure building is underway. This PBO surge spans power and water, property, ports, and airports, across both the government/public and private sectors.

Morgan Stanley forecasts US$21.7 trillion in infrastructure spending in emerging markets over the next decade (at least before the onset of the global credit crunch). The report identifies a surge in market listings of owners, operators, and contractors that build infrastructure/assets in emerging markets, and states that the number of listed infrastructure-related entities therein is up from 230 to 354 (a 54 percent increase) over the last five years. Morgan Stanley also sees huge market capitalization growth, from $146 billion to $1.1 trillion over the same 10-year period.

But what about the big enterprise software competitors who also play in these markets, and who indisputably own the IT departments’ mind share? Meridian’s President and Co-Founder John Bodrozic, quoted in Part 1 of this series, boldly says “bring it on” when queried about competitive consolidation, such as Oracle’s recent acquisition of Primavera.

“Oracle now has two products that do the same thing: Oracle Projects and Primavera, and the real question for installed users will be ‘which one lives and which one dies,’ or will Oracle continue its six-year history of letting multiple products do the same thing (e.g., JD Edwards, PeopleSoft, etc.) but with zero interoperability? Oracle’s published Frequently Asked Questions (FAQ) document on the acquisition states that Oracle plans on integrating Primavera PPM with ‘Oracle ERP,’ but never states which one of Oracle’s ERP products. With nearly 50 acquisitions in a few years, one wonders how any buyers or sellers can make sense of which products can and should work well together in which instances.”

While I can understand the Meridian president’s confidence in his holistic ILM/BIM offering, I certainly would not dismiss the Primavera acquisition. At the least, I agree with Vinnie Mirchandani’s liking of the deal and of Oracle’s vertical industry-based acquisition strategy, backed by a coherent Oracle Fusion Middleware (OFM) strategy. Also, Brian Sommer has an impressive blog post on the PPM software space and on what the deal might mean for Oracle and for Primavera customers.

In addition to its relatively small size and best-kept-secret status when it comes to brand recognition, Meridian’s major challenge could be the fact that, since the ILM space is still new and evolving, there is a steep learning curve in explaining it to customers. This is why the market still often defaults to more simplistic solutions that don’t do the job as well as Meridian does. It is akin to customers’ silly practice of switching light bulbs to save money on energy when they have no insulation in the walls.

There are so many inefficiencies and even adversarial relationships in the industry that unneeded costs and poor practices are built in and accepted. The challenge for Meridian is to build a greater understanding of the big picture impact of a complete PBO product line so that the market doesn’t continue to default to less complete ILM solutions like Primavera, Oracle, Skire, Tririga, etc.

Also, since the BIM/ILM connection is enabled via the partnership with Horizontal LLC, its competitors can emulate that over time (i.e., they can strike partnerships too). Thus, are there any other frontiers that Meridian could tackle next, in order to be ahead of the curve and continue to challenge the market with the “Catch us if you can” mantra? To that end, other potential areas are things like building new automated solutions for pulling data out of Meridian solutions to make the Leadership in Energy and Environmental Design (LEED) certifications turnkey, and exploring the latest construction delivery methods (e.g., private-public partnerships such as lease-leaseback for school districts).

Final Thoughts

In summary, Meridian offers a well-thought-out approach for small to large companies, with the right technical foundation for the future: a native SOA/Web services platform that has already been in the market for the past five years. Additionally, it has integrated business functionality for managing ILM across the complete PBO spectrum. This scope includes the combination that both the market leaders and market pundits are missing: PPM, Scheduling, Facilities Management, and BI. If you are a project-based organization engaged in holistic capital infrastructure lifecycle management, this is one solution you should certainly consider.

Is One Country Good Enough to Handle Your Outsourcing Business?

The concept of “portfolio” is very prominent in the finance world. “In finance, a portfolio is an appropriate mix of or collection of investments held by an institution or a private individual.” (Wikipedia) Portfolio management practice now encompasses many different models; some have become very complicated and require tremendous analysis. Simply speaking, the rationale for investing in different assets instead of betting all the money on one is that different assets have different return potentials and different risk exposures. If you build your portfolio appropriately, the diversity of your assets may help you offset individual risks while maintaining an acceptable return.

Let’s take a look at this extremely simplified example: if you have the opportunity to buy a bond (low return but risk free) and a stock (high return but associated with high risk), what is your investment decision? Absolutely risk-averse people will buy only the bond, and their opposites (extreme risk-takers) will buy only the stock. However, most people are likely to take some risk (but not too much) while expecting a higher return than the bond can yield. Thus, a mix of the two assets makes sense, and the proportion of each depends on what your return expectation is, or in other words, how much risk you are willing to take.

In the investment area, a “hedge” is a widely used method of managing risk. The main idea of hedging is to include two different types of assets in one portfolio, with a relationship between the two: when one tends to go down, the other goes up, and vice versa. Hence, no matter what the economic and market situation is, the risk of your investment portfolio will be manageable.
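A minimal numerical sketch of that hedging idea (my own made-up return figures, in Python): when two assets move in opposite directions, a 50/50 mix of them is far less volatile than either asset held alone.

```python
# Made-up period returns for two assets that tend to move in opposite directions.
asset_a = [0.05, -0.02, 0.04, -0.01, 0.03]
asset_b = [-0.03, 0.04, -0.02, 0.03, -0.01]

def variance(returns):
    """Population variance of the returns, used here as a simple proxy for risk."""
    mean = sum(returns) / len(returns)
    return sum((r - mean) ** 2 for r in returns) / len(returns)

# The hedged portfolio holds half of each asset, so each period's
# return is the average of the two assets' returns for that period.
portfolio = [(a + b) / 2 for a, b in zip(asset_a, asset_b)]

print(variance(asset_a))    # risk of holding asset A alone
print(variance(portfolio))  # near zero: the opposite swings cancel out
```

With these numbers the cancellation is perfect, which real markets never deliver; the point is only that negatively correlated assets pull portfolio risk down without necessarily sacrificing the average return.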

Having provided the two examples above, I hope the idea of vendor portfolios becomes easier to understand. First of all, let’s take a look at the risks associated with software outsourcing. Besides quality, delivery, support, and other issues that relate more to individual vendors, there are also risks from the macro-environment:

* Physical risks: natural disasters (e.g., earthquakes, floods, and tornadoes) that will halt or temporarily impede your vendors’ development activities

* Regulatory risks: regulations (e.g., import/export tariffs, taxes, and employee compensation requirements) that will impact your vendors’ business costs and, as a result, your costs

* Economic risks: such as exchange rates, employment levels, and vendor domestic market demands that will influence vendors’ pricing policies

* Societal and political risks: caused by political events, strikes, and culture shifts that will directly or indirectly change your vendors’ ability to provide service.

The vendor-specific risks (or, let’s call them micro risks) vary from vendor to vendor, but the macro risks are more related to the macro-environment in which vendors operate. In many cases, it is convenient to examine these risks at a country level.

If we agree that macro risks exist and that many of them vary from country to country, we may draw a conclusion that too much reliance on one single country is like investing all your money in one stock.

By building a portfolio that includes vendors from different countries, a company should be in a better position to manage macro risks. If there are complementary elements amongst those countries, you may expect a hedging situation. For example, when you discover that outsourcing to a certain country becomes unprofitable due to increased programmer wages, you may find that in another country wages are going down due to the surplus of programmers.

Microsoft’s Underlying Platform Parts for Enterprise Applications: Somewhat Explained

Shedding Some “Northern Star” Light on IEE

For IEE, IFS uses Microsoft ClickOnce, a technology designed to perform Web-based deployment of rich applications. Basically, the authorized user clicks on a link and the application loads straight from the Web server, without needing to be installed and distributed via CDs (like traditional client/server applications). It works similarly to the counterpart Java Web Start or Adobe Flash technologies.

ClickOnce can be used for all Microsoft .NET UI application styles, including Windows Presentation Foundation (WPF), Windows Forms, and Silverlight. Basically, it is the deployment technology for Windows applications. IFS decided not to use WPF as the technology for building the UI initially, but plans to do so for its next major update, due in a couple of years, when it also expects the availability of Microsoft .NET Framework 4.0, which the vendor believes will serve its needs well. It is also currently possible to mix WPF and Windows Forms in the same application, since the interoperability apparently works very well.

In any case, the current set of tools used by IFS has helped with ergonomic design and easy navigation features, such as the adaptable links panel, contextual breadcrumb navigation, and rich media. The adaptable links panel is a panel on the screen that shows all the places “where a user can go from here.” For example, when viewing a customer order, the links panel will show links to customer information, price agreement, service level agreement (SLA) contract, and other “related” information (see figure below).


[Figure: booklet-p12-1-small-display1.png — the adaptable links panel]


A contextual breadcrumb is a context-sensitive navigation menu that helps users visually navigate (and return to the start page, much as in the classic fable of Hansel and Gretel) to other application areas/pages that are “near” the current “path” in the application. Windows Vista has a similar feature for folder navigation. Related to this is the Visual Recent Screens capability, a visual navigation history showing all pages in the application visited since the user logged on (see figure below). It is also similar to the Internet Explorer (IE) feature that shows all open tabs.


[Figure: booklet-p10-3-recent-screens1.png — Visual Recent Screens]



A good example of breadcrumb navigation can be found in Webcom’s WebSource CPQ product catalog and configurator. The product is written with Java and AJAX UI technologies, but note that this navigation mode is technology-agnostic.

A Webcom user can be a seller of categories such as Software, Hardware, and Services. If a potential buyer clicks on Hardware, the system opens up subcategories such as Servers and Printers. Clicking further on Servers reveals options such as Web Servers, Storage Servers, File Servers, and so on.

While the user is navigating, the system creates “breadcrumbs” at the top of the screen, so that the user knows how he/she arrived at this place and how to go back. The breadcrumbs path might look like:

“Home > Top Level Catalog > Hardware > Servers > …> Current location”
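The mechanics are simple enough to sketch in a few lines of Python (an illustration of the general pattern only, not Webcom’s actual implementation): the trail is a stack of category names, and clicking an earlier crumb truncates the stack back to that point.

```python
def breadcrumb(trail, separator=" > "):
    """Render the navigation trail as a single breadcrumb string."""
    return separator.join(trail)

def click_crumb(trail, index):
    """Jump back: keep everything up to and including the clicked crumb."""
    return trail[:index + 1]

# Category names follow the Webcom example in the text.
trail = ["Home", "Top Level Catalog", "Hardware", "Servers"]
print(breadcrumb(trail))                  # Home > Top Level Catalog > Hardware > Servers
print(breadcrumb(click_crumb(trail, 1)))  # Home > Top Level Catalog
```

This is why the pattern is technology-agnostic: any UI toolkit that can maintain a list of visited categories and render it as clickable text can offer breadcrumbs.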

Making it Stick

As for rich media features, they comprise everything that is not a static HyperText Markup Language (HTML) page, such as RealVideo, Adobe Flash, Windows Media Player, Microsoft Silverlight, and so on. Previously, these gadgets could only show videos and play music and animation, but now users can write applications on top of them.

In IFS’ case, the most visible use of rich media is the Sticky Notes feature. Basically, the user can put a sticky (“Post-it”) note (only logically, not really physically, duh!) onto any record in the system (e.g., customers, projects, orders, invoices, etc.). The note “sticks” to the record and will be visible to all other authorized users who look at that same record. Inside the note, users can put any content they can put in a regular “rich text” field in Windows.

This content includes, for example, pictures, hyperlinks, video clips, Object Linking and Embedding (OLE) objects (or any embeddable document type), etc. The sticky note enables data to be kept that is not part of the normal system database (just as a sticky note sits apart from the files on a physical desktop), but that can be searched along with the data in the database. This serves the purpose of capturing knowledge in the organization, and not just with an individual (see figure below). Could this capability also be a first date between user communities and enterprise applications?


[Figure: booklet-p8-1-purchase-requsition1.png — a sticky note on a purchase requisition]
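A rough sketch of the idea in Python (the names and data are entirely hypothetical; IFS’s actual implementation lives inside its own application): the notes are stored outside the record bodies, but a single search pass covers both.

```python
# Regular records in the "system database" (hypothetical sample data).
records = {
    "ORD-1001": "Customer order for alarm spare parts",
    "INV-2002": "Invoice for server hardware",
}

# Sticky notes are stored separately, keyed by the record they "stick" to.
notes = {}

def add_note(record_id, text):
    notes.setdefault(record_id, []).append(text)

def search(term):
    """Search record bodies and sticky notes together; return matching record IDs."""
    term = term.lower()
    hits = {rid for rid, body in records.items() if term in body.lower()}
    hits |= {rid for rid, texts in notes.items()
             if any(term in t.lower() for t in texts)}
    return sorted(hits)

add_note("INV-2002", "Double-check server rack dimensions before shipping")
print(search("rack"))  # → ['INV-2002'], found via the sticky note alone
```

The key design point is the last line: “rack” appears in no record body, yet the record is still found, because the informal knowledge in the note is indexed alongside the formal data.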


Silver Lining in Silverlight?

But unless Microsoft Silverlight is used, Windows users are tied to the desktop, which means less reach and portability than in browser-based applications. Silverlight (formerly called WPF Everywhere [WPF/E]) provides a runtime browser-based deployment environment for WPF applications written in Extensible Application Markup Language (XAML). Silverlight is a subset of WPF and is designed to run cross-platform to enable Rich Internet Applications (RIAs); WPF has some additional capabilities, but assumes it is running on a “Windows box.”

Silverlight and WPF had their share of broad platform announcements at the recent Microsoft Professional Developers Conference (PDC) 2008. Relative to business applications-oriented UI design controls, there has been a panoply of new capabilities within the Silverlight toolkit, such as Charting, TreeView, DockPanel, WrapPanel, ViewBox, Expander, AutoComplete, NumericUpDown, and so on.

These controls should all also be available for WPF, which for its part has received controls like DataGrid, DatePicker, Ribbon, Calendar, and VisualStateManager. These controls are also included in the Silverlight 2.0 announcement, although the Ribbon control from WPF is not in Silverlight yet.

These capabilities are in great part what the Dynamics team has been waiting for before jumping broadly onto the Silverlight/WPF bandwagon. In addition, the Microsoft Developer Division is open-sourcing the Silverlight controls, so we can expect to see lots of advanced controls added by ISVs down the track. Microsoft acknowledges that visualization is a key area of investment for Microsoft Dynamics products, and as Silverlight capabilities around data expand, Dynamics products will add Silverlight experiences to their common controls.

This is not to neglect the work Microsoft has done around introducing role-tailored user experience (UX) across the Dynamics products, embedding role-specific and contextual analytics directly in the application UX, and introducing both breadcrumb bar navigation and action panes (the Office ribbon-style interface). Independent of what “plumbing” the company uses, these have been pretty dramatic UX changes, and similar to the abovementioned navigation gadgets in some of the other vendors’ products.

Bottom Line: Win-Win for Microsoft

Coming back to the second issue from the beginning of this blog series, i.e., Microsoft Business Division’s (MBD) Profit & Loss (P&L) statement, at the Convergence 2008 user conference, the giant stated the following stats for Microsoft Dynamics:

* A 26 percent revenue growth in Q2 2008;
* Nearly 300,000 customers worldwide;
* Nearly 10,000 business partners worldwide;
* About 1,700 Dynamics solutions in Solution Finder; and
* Over 14,000 customers and over 625,000 users of Microsoft Dynamics CRM.

Now, some nitpickers might say that Microsoft Dynamics is not a profit generator for Microsoft, and may even be bleeding money due to all the ongoing product investment. Well, guess what: Microsoft is certainly not in such dire need of cash that it has to squeeze it out of Dynamics’ operations.

As some of you might know, now that Dynamics is part of MBD, which contains Microsoft Office, Dynamics, Exchange, Office Live and Unified Communications, the parent company doesn’t report the Dynamics business separately any longer in terms of revenue and operating income. However, Microsoft still discloses Dynamics customer billings figures every quarter, and here are the three data points it has publicly disclosed:

1. In fiscal 2006, the last time Dynamics was an external P&L entity, it achieved profitability in Q4, and was profitable for the full year;
2. In fiscal 2007, Dynamics crossed an important internal milestone of becoming an over US$ 1 billion business; and
3. For fiscal 2007 and 2008, Dynamics has reported a 21 percent growth in billings in each of those years.

But the thing that represents Dynamics’ “extra” contribution is the sale of all those Microsoft platform components to all of the customers of Dynamics. That is to say that Dynamics creates a “pull” for other Microsoft technologies.

Money for the Caviar

Plus, let’s not forget about all the revenue and profits coming from related sales of SQL Server, SharePoint, Office, Exchange, and so on to an army of ISV’s (many of which are even fierce Dynamics competitors).

Microsoft has also touted and recruited many ISVs for Office Business Applications (OBAs) as a way in which line-of-business (LoB) systems can be seamlessly integrated with the ubiquitous Microsoft Office productivity tools. Business applications are made possible by key platform capabilities, called OBA Services, in the Microsoft Office 2007 system that cater to the following features: workflow; search; the Business Data Catalog (BDC); a new, extensible UI; Microsoft Office Open XML Formats; and the Web Site and Security Framework.

Another example (staying on the IFS theme) is IFS Business Analytics, the first product from the vendor’s Intelligent Desktop initiative. This product is a business intelligence (BI) solution that extends Microsoft Excel from a desktop productivity tool to a full-fledged, enterprise-scale client for planning, reporting and analysis. In other words, users hereby benefit from using IFS Applications (and its embedded security and authorizations) within an already familiar Excel environment.



TARGIT BI Product Certified

Recently, I met over the Web with TARGIT’s Ruben Knudsen and Ulrik Pedersen, along with some TEC cohorts, to verify TARGIT’s BI product. TARGIT had completed a TEC-designed RFI containing a list of BI capabilities that every BI vendor could support “out of the box.” The RFI is a common list of BI capabilities that we send to all BI vendors, and from TARGIT’s long list of responses (some 1,900 criteria in total), we chose 172 entries for the vendor to demonstrate. Without missing a beat, TARGIT demonstrated all 172 selected entries over “live meeting” and telephone sessions. TARGIT passed with 100 percent verification – no omissions.
TARGIT is a Danish company specializing in business intelligence. TARGIT works through a worldwide distribution channel that includes distributors (VARs), OEM partner relationships, and direct sales to customers. The product has both a real-time interface and a batch system interface. TARGIT’s standard data marts and OLAP facilities allow for drill-downs to detailed information, desktop and Web reporting, as well as drill-through analysis via SQL functions. TARGIT indicated that the majority of its implementations are done via resellers.

I asked Ruben and Ulrik about TARGIT as a company, and I liked what I heard. Many of its 75 employees and 80+ distributors have been with the company since it was founded 22 years ago. Major clients include municipalities. TARGIT does most of its sales via distributors, such as Fujitsu, and retailers. The product is very robust, efficiently coded, and geared to tier-2 and tier-3 companies (from 50 to 500 employees, with $500 million in earnings or below).

From a technical perspective, there are several BI business areas for which TARGIT provides out-of-the-box analytical solutions that interface to several different enterprise resource planning (ERP) systems: Microsoft Dynamics AX, NAV, and GP; Oracle; and SAP. The business areas covered include supply chain management (SCM), customer relationship management (CRM), finance, sales, and projects.

TARGIT is strong in the area of activity-based costing analytics. From the user interface, TARGIT provides scorecards, bubble charts, dashboards, hierarchical org charts (not the chart drawing itself, though), and alerts. If clients want to, they can build reports using the usual statistical distributions and the associated functions for regression and correlation analysis, and calculations of the mean, variance, standard deviation, and other statistical measures.
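As a rough illustration of the kind of statistical calculation such reports involve, here is a short Python sketch using only the standard library; the sales figures are invented for the example:

```python
import statistics

# Hypothetical monthly sales for two product lines (invented data)
line_a = [120.0, 135.0, 150.0, 160.0, 180.0, 200.0]
line_b = [60.0, 66.0, 76.0, 82.0, 88.0, 101.0]

mean_a = statistics.mean(line_a)     # arithmetic mean
var_a = statistics.variance(line_a)  # sample variance
stdev_a = statistics.stdev(line_a)   # sample standard deviation

# Pearson correlation coefficient, computed from first principles
mean_b = statistics.mean(line_b)
cov = sum((a - mean_a) * (b - mean_b)
          for a, b in zip(line_a, line_b)) / (len(line_a) - 1)
corr = cov / (stdev_a * statistics.stdev(line_b))

print(f"mean={mean_a:.1f} stdev={stdev_a:.2f} corr={corr:.3f}")
```

A BI tool wraps exactly this sort of arithmetic in a point-and-click report designer, so the user never writes the formulas.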

The Whitelisting of America: Security for IT

Once upon a time, around 1995, the well-known American agency, the National Security Agency (NSA), decided that there was no computer operating system adequately secure for its needs. In analyzing the risks, it found that while UNIX was the most secure, additional protection was needed. The agency looked at the anti-virus industry, at problems with Trojan software, at the problem of keeping up with virus authors, and at the requirement for government-level security to prevent a corrupted module from secretly penetrating its operating or business system environment. Its conclusion was that “anti-virus blacklisting” is ineffective and isn’t worth a pinch of dung.

Eight years ago, in 2000, the NSA produced a kernel extension for Linux called SELinux (Security-Enhanced Linux). What this extension does is manage a catalog of permissions for files and processes. The way it generally works is as follows:

An operating system-owned rule database consisting of a set of permissions (i.e., a “whitelist”) is generated, and every executable (be it operating system or user program) is assigned permissions according to its functionality. A module not in the permissions database is not allowed to execute. Modules with matching rules in the database are limited to the permissions determined by the rules. The restrictions may include more than execution privileges, such as not being allowed to do things the module was not designed to do. For example, a video driver is not allowed to write to a file. A second example is that of a “user program” rule that allows the module to write to a subset of all public data directories. Digital signatures are used to certify each module and to ensure that a replacement module is blocked from execution until a new certification rule/signature for it is registered.
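The rules above can be sketched in a few lines of Python. This is only a conceptual illustration (a real implementation like SELinux lives in the kernel and uses signed policy databases; the hash-based lookup here merely stands in for the certification step):

```python
import hashlib

# Hypothetical whitelist: module hash -> set of permitted operations.
# The content hash stands in for a digital-signature check.
WHITELIST = {}

def register(module_bytes, permissions):
    """Certify a module and record the operations its rule grants."""
    digest = hashlib.sha256(module_bytes).hexdigest()
    WHITELIST[digest] = set(permissions)

def is_allowed(module_bytes, operation):
    """A module not in the database may not execute at all; a known
    module is limited to the operations its rule grants."""
    digest = hashlib.sha256(module_bytes).hexdigest()
    perms = WHITELIST.get(digest)
    if perms is None:
        return False  # unknown (or tampered-with) module: blocked outright
    return operation in perms

# A "video driver" is registered for display output only
register(b"fake video driver code", {"video_output"})

print(is_allowed(b"fake video driver code", "video_output"))  # True
print(is_allowed(b"fake video driver code", "file_write"))    # False: outside its rule
print(is_allowed(b"patched driver code", "video_output"))     # False: hash no longer matches
```

Note the last case: a replaced module fails the lookup until it is re-registered, which is exactly the replacement-blocking behavior described above.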

SELinux has been a great success, and as a result, it has been adopted in Red Hat Linux distributions and, since 2005, in SUSE Linux. Whitelisting is proven.

More recently (2008), this capability is still not present in Microsoft operating systems. As a result, third-party vendors are looking at the Linux successes and asking “why not?” Several competing vendors have recognized that if Linux systems can do it, then so can they for the Microsoft platforms, and several companies have brought whitelist products to market. Each has developed its own whitelisting facility, sold to complement the traditional anti-virus software for Microsoft platforms. Whitelisting for other platforms works in a manner similar to SELinux. Rather than blocking bad applications it recognizes by signature, it allows execution only of those applications and processes that match the security rules and internal certificates. Again, for example, a video driver is only allowed to be a video driver, and nothing else. In the same vein, a program may be allowed to write to a file, but not to use video or trap keystrokes. In effect, like SELinux, whitelisting builds good fences around all modules, and makes for a very secure system.

Where whitelisting comes into play big time is in the protection of database systems. All database systems are in need of protection from module replacement or from modules that misbehave. Whitelisting builds and manages the permission “fences.”

A reasonably good article about whitelisting for Microsoft platforms, Jason Brooks’ “Toward a More Idiotproof Internet,” was published in eWeek on October 1, 2008. SELinux information is available from the National Security Agency (NSA).

Job Scheduling Maze in Distributed IT Landscapes

We certainly learn new things every day, and sometimes out of pure serendipity. Namely, when I was recently asked by one of my industry contacts (working for a PR agency) whether I would like a briefing with and about his client, whose name included the word “batch,” I agreed, thinking it was a process enterprise resource planning (ERP) vendor or maybe a manufacturing execution system (MES) vendor.

To my chagrin, as soon as the Web conference meeting and demo started, I realized that I was in the quite unfamiliar territory of enterprise job scheduling and workload automation, especially when it comes to highly diverse and distributed information technology (IT) environments. Many of these jobs are still run in a batch mode without human interaction and intervention (at least preferably).

Some batch job examples would be: database/data warehouse updates, payroll runs, file copying and archiving, system reboots, disk defragmentation, report printing, processing of insurance claims, billing statements, and/or enrollments, file transfers (e.g., via file transfer protocol [FTP]), and so on. Thus came the ActiveBatch product name, I guess.

For someone who is not that technically proficient, the knee-jerk reaction was to say: “Sorry, this was a misunderstanding”, but I was drawn in by a good discussion and the product’s apparent usefulness. In fact, the briefing made me think about how often we take things for granted, and how unaware we are of the legwork that IT staff, especially within large corporations, is conducting behind the scenes. We are usually aware of the IT folks only when our PC is not working or the email server is down, and when we “need help now!”

The conversation made me realize that this vendor truly understands mission-critical business that requires high-performance systems. ActiveBatch originators know what it means to meet both deadlines and service-level response times. In contrast, while the commonly used enterprise applications, databases, and platforms (e.g., Microsoft Windows with Task Scheduler, or UNIX with cron) contain limited scheduling functions, these address only basic requirements within the confines of the individual system.

In fact, from my erstwhile experiences as a Baan (now Infor ERP LN) and SAP R/3 functional consultant, I vaguely recall some task scheduling functions (e.g., setting overnight or weekend material requirements planning [MRP] runs, or performing a trial balance and general ledger [G/L] updates at a certain time). The technical consultants would set up these batch jobs within the Baan Tools or SAP Basis administrative capabilities.

Overcoming Individual Systems’ “Autism”

However, the problem is that as the number of systems, applications, databases, and whatnot platforms increases, the IT business community requires automation among and across these various systems in an end-to-end manner to provide a single point of workload automation. In addition to the ability to integrate all the pieces of a heterogeneous environment, such a system should be able to schedule jobs based on events and associated built-in business logic, as opposed to merely on a set date/time, and in a single-path manner within a silo (as is the case with inwardly-oriented “autistic” enterprise systems).
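The event-driven chaining described above can be sketched as follows; the scheduler class and event names are invented for illustration and are not ActiveBatch's actual API:

```python
# Minimal sketch of event-driven job chaining: a job declares the event
# that triggers it, rather than a fixed date/time, so a downstream job
# starts the moment its upstream dependency succeeds.

class Scheduler:
    def __init__(self):
        self.triggers = {}  # event name -> list of jobs to fire

    def on(self, event, job):
        self.triggers.setdefault(event, []).append(job)

    def fire(self, event, log):
        for job in self.triggers.get(event, []):
            job(self, log)

def extract(sched, log):
    log.append("extract done")
    sched.fire("extract.success", log)  # a success event, not a clock tick

def load_warehouse(sched, log):
    log.append("warehouse loaded")

sched = Scheduler()
sched.on("nightly.start", extract)
sched.on("extract.success", load_warehouse)  # runs immediately after; no idle wait

log = []
sched.fire("nightly.start", log)
print(log)  # ['extract done', 'warehouse loaded']
```

Contrast this with date/time scheduling, where the warehouse load would have to be booked for, say, 2 a.m. "just in case" the extract has finished by then.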

To that end, ActiveBatch provides a central point of scheduling that allows each of these disparate systems to be automated and integrated into coherent workflows. The essence of ActiveBatch, Intelligent Automation, is to provide the level of integration for applications, platforms, databases, and specific functions without the need for costly and tedious code scripting (and reliance on programmers).

What prospective customers look to ActiveBatch to accomplish is the following:

* Improve IT service levels;
* Integrate workflows and business processes between diverse applications and platforms;
* Reduce the number of manual errors (which otherwise come with the territory of relying on humans);
* Implement a centralized view of jobs that span across the vast platforms’ landscape; and
* Eliminate an “artificial” wait or idle time that has to be built into existing workflows (to accommodate all imperfections).

In a nutshell, this is about ultimately reducing the cost of IT operations. Some customers cite the fact that often in the past the original enterprise job scheduler (typically made in-house in a heavily customized manner) would allow some jobs to fail without notifying anyone, and the company wouldn’t know anything about it until someone belatedly complained about not getting the output of the job (e.g., a report). Tracing and fixing these instances would often take more time and resources than the job scheduler was supposed to save in the first place.
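The silent-failure problem can be illustrated with a tiny wrapper of the kind such schedulers add around every job (an invented helper, not vendor code): the failure raises an alert immediately instead of waiting for someone to miss the output.

```python
# Sketch of the "notify on failure" behavior the home-grown schedulers
# lacked: every job runs inside a wrapper that records the outcome and
# raises an alert instead of failing silently.

def run_job(name, job, alerts):
    try:
        job()
        return True
    except Exception as exc:
        # In a real system this would page an operator or send an email;
        # here we just collect the alert text.
        alerts.append(f"job '{name}' failed: {exc}")
        return False

alerts = []
run_job("nightly-report", lambda: 1 / 0, alerts)  # fails: division by zero
print(alerts[0])  # someone is told now, not when the report is missed
```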

ActiveBatch and Advanced Systems Concepts Inc. (ASCI)

But let me backtrack and talk about ActiveBatch’s genesis. The company that owns ActiveBatch is the privately held Advanced Systems Concepts, Inc. (ASCI), headquartered in Morristown, New Jersey (NJ), United States (US). It was founded in 1981 as a system software engineering and consulting company focused on the development of products for the former Digital Equipment Corporation’s (DEC) OpenVMS operating system (OS) (in the meantime, of course, DEC was acquired by Compaq, and both are now part of Hewlett-Packard [HP]).

ASCI’s first product, INTACT, was a transaction processing system for OpenVMS to allow customers to use OpenVMS systems in commercial applications. INTACT was licensed to major financial organizations around the world. In late 1986/early 1987, DEC exclusively licensed INTACT from ASCI, and renamed it DECIntact, as a solution incorporated as part of its erstwhile Enterprise Application Strategy. DEC’s decision at the time was based on the need to compete with IBM and its transaction processing system, Customer Information Control System (CICS), with a counterpart competitive solution. INTACT and DECIntact are both still in use in many organizations around the world.

ASCI has also developed other layered product solutions for the OpenVMS market including:

* SHADOW – the first shadowing/data replication system for OpenVMS;
* WATCH – for help desk and other similar functions that allow one user to watch other terminal sessions;
* Performance Simulation System (PSS) – the automated regression and application testing system for the OpenVMS applications; and
* VIRTUOSO – a virtual container technology that enables the development of virtual disks to be used as random access memory (RAM) and cached disks for performance improvements, encrypted disks for security, and more.

In 1991, ASCI enhanced SHADOW with a new family of products including FileSHADOW and RemoteSHADOW for OpenVMS. RemoteSHADOW for OpenVMS has since been installed in over 1,000 customer environments to reduce the time of data recovery in the event of a system or site loss. For instance, in 2001, in the aftermath of the dreadful 9/11 attacks, RemoteSHADOW for OpenVMS was used by many financial organizations, including Dresdner Kleinwort Wasserstein (DrKW) Bank, to recover their data and get their business functioning again within hours at an alternate site.

In 1996, ASCI broadened its OS focus from exclusively OpenVMS to include UNIX and Microsoft Windows. One of the UNIX-based products that is still in use is DeviceShare. For its part, XLNT (Extended Language for NT), a command and scripting language for Windows (not only Windows NT, as the name would imply), was introduced to offer system administrators a scripting alternative to managing systems and developing workflow scripts. Over 100,000 XLNT licenses are currently in use around the world.

Enter ActiveBatch

In 1998, XLNT users needed a batch system to automate and run their XLNT scripts on a schedule. ASCI thus introduced Batch Queue Management System (BQMS) to assist XLNT users to automate their scripts on and across Windows servers. In 2000, BQMS was renamed and re-introduced as ActiveBatch V3, a heterogeneous job scheduling solution for Windows, UNIX, Linux, and OpenVMS systems.

ASCI proudly claims to be self-funded, and that its development, quality assurance (QA), and support teams are its own (and not outsourced or off-shored). The company has licensed ActiveBatch to over 1,400 customers in 34 countries around the world.

With its Intelligent Automation capabilities and performance, having been tested across 2,000 servers performing over 1.3 million jobs per day, ActiveBatch is fast becoming a workload automation and enterprise job scheduling solution of choice. Additionally, ASCI has licensed nearly 4,000 clients in 34 countries around the globe across the full range of its products. Its clients include many of the Fortune 1000 companies, with a mix of medium to large enterprises.

ASCI competes with several powerful and renowned application providers in the Workload Automation and Job Scheduling market including Computer Associates (CA) Unicenter AutoSys, BMC CONTROL-M, and IBM Tivoli. Many of these vendors’ products were originally developed for the mainframe, not necessarily for today’s heterogeneous, horizontal server environments. As a result, they have customarily been adapted (retrofitted) to today’s distributed server environments.

More modern job scheduling vendors’ products like Tidal Software, UC4, Redwood Software (especially in SAP environments), Quartz (open source), and of course ActiveBatch, understand the distributed server environments, and have targeted their solutions to address this requirement. Part 2 of this blog series will analyze the ActiveBatch architecture and evolution in terms of functional and technical capabilities.

Job Scheduling Maze in Distributed IT Landscapes

ActiveBatch Architecture

The ActiveBatch architecture has always been a multi-tier approach (enabling centralized job scheduling with distributed job execution) consisting of the following elements:

* ActiveBatch Job Scheduler — This Microsoft Windows-based layer consists of the ActiveBatch automation intelligence and logic to understand the requirements presented in operating a real-time, event-driven system;
* ActiveBatch Backend – This database tier is where the “job” definitions and templates are maintained along with how the “jobs” are to be triggered (i.e., date/time-based, event-based, data-based, or on-demand). This layer also determines which resources are required (e.g., servers) to run the defined jobs, and what to do upon a job’s success or failure, or based on other information. This layer uses either Microsoft SQL Server or Oracle database;
* ActiveBatch Client – This client layer is the interface with which the user, developer, or operator interacts with the ActiveBatch system. Workflow design, programmatic connections, or simply the status of jobs (success or failure) can be monitored or reviewed. There is a raft of possible client-side technologies, such as Microsoft Component Object Model (COM), Web Services/Simple Object Access Protocol (SOAP) for cross-platform environments, Windows-based applications, Command Line Interface (CLI), Web server interfaces, Personal Digital Assistant (PDA) gadgets, SmartPhone devices, and wireless BlackBerry handheld devices; and
* ActiveBatch Execution Agents – This tier represents the physical or virtual hardware and software systems, which execute the “jobs,” applications, processes, etc., as directed by the abovementioned ActiveBatch Job Scheduler. Execution agents can run on a number of operating system (OS) platforms like Microsoft Windows (including Windows Vista, Windows Server 2003, Windows XP, and Windows 2000), UNIX, Linux, OpenVMS (including the Alpha and Itanium brands), and IBM z/OS (for the ActiveBatch Job Library feature that will be explored later on).
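To make the Backend tier's description concrete, here is a hypothetical sketch of what a job definition record might carry; the field names are invented for illustration and are not ASCI's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class JobDefinition:
    """Rough shape of a backend 'job definition' record: how the job is
    triggered, which resources it needs, and what to do on success/failure."""
    name: str
    trigger: str                                            # "datetime" | "event" | "data" | "on-demand"
    required_resources: list = field(default_factory=list)  # e.g., server groups
    on_success: list = field(default_factory=list)          # follow-on job names
    on_failure: list = field(default_factory=list)          # remediation/alert jobs

payroll = JobDefinition(
    name="payroll-run",
    trigger="datetime",
    required_resources=["win-server-pool"],
    on_success=["print-pay-stubs"],
    on_failure=["alert-operations"],
)
print(payroll.trigger, payroll.on_success)
```

The Job Scheduler tier reads records like this from the database and dispatches work to whichever execution agents satisfy `required_resources`.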

A fully operational ActiveBatch instance requires each of the basic architecture layers, i.e., Job Scheduler, Database, Client/User Interface (UI), and a minimum of one ActiveBatch Execution Agent. Most ActiveBatch shops run Windows systems, but even some like LiveWire Mobile, which has over 90 Linux servers and only a few Windows systems, still strongly support ActiveBatch as a key approach to integrating applications, databases, and platforms into coherent workflows.

Panoply of Job Types and User Views

As mentioned in Part 1, ActiveBatch attempts to provide “Intelligent Automation” for its users with an approach that can minimize or eliminate scripting. Up through Release 6, the product has offered the following five types of jobs (by comparison, most other counterpart job scheduling products support only one or two job types):

1. Process Job — lets users run user-written scripts and/or executable program files. For example, I could run the Windows Internet Explorer executable (IEXPLORE.exe) and pass TEC’s Uniform Resource Locator (URL) as a parameter to open TEC’s home page. Worth noting here are the “Copy Script to execution machine” option for running scripts as a process, as well as the ability to allow pre- and post-job steps;
2. File Transfer Protocol (FTP) Job — can initiate a series of FTP and/or Secure FTP commands in a heterogeneous fashion, using the following secure protocols: Secure Socket Layer (SSL) v3 and v2, Private Communications Technology (PCT), Transport Layer Security (TLS), Secure Shell (SSH), or without embedded security;
3. File System Job — lets users perform operations such as Copy, Delete, Rename, or Move Files, or Create or Delete a Directory, without regard to the specific platform they are on and what command shell they might need to use;
4. Email Job – lets users compose e-mails, whereby email servers and/or clients that leverage Simple Mail Transfer Protocol (SMTP) can be utilized (not necessarily only Microsoft Outlook or Microsoft Exchange Server). A notable capability is the use of variables to create alerts/triggers (e.g., when a certain stock symbol reaches certain value); and
5. Script Job — where the script (rather than an executable file, e.g., a Visual Basic Script [VBS] piece of code) is placed into an ActiveBatch job so that it can be executed on any system rather than being limited to the system where the file resides. Scripts in virtually any scripting language can be used, and if ActiveBatch does not provide an extension, the designer can simply add it to the list. However, the server running the script must be able to associate the relevant file extension.
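For comparison, here is roughly what a File System Job spares the user from scripting by hand: a platform-neutral copy/rename/delete in Python's standard library, with no platform-specific command shell involved (the file names are hypothetical):

```python
import shutil
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())

# Create a sample file, then Copy / Rename / Delete without invoking
# any platform-specific shell command (no `copy`, `cp`, `ren`, or `mv`).
src = workdir / "report.txt"
src.write_text("quarterly figures")

copied = workdir / "archive" / "report.txt"
copied.parent.mkdir()
shutil.copy2(src, copied)                       # Copy (preserves metadata)
renamed = copied.with_name("report-2008Q4.txt")
copied.rename(renamed)                          # Rename/Move
src.unlink()                                    # Delete the original

print(sorted(p.name for p in workdir.rglob("*.txt")))  # ['report-2008Q4.txt']
```

A job type packages these operations behind a form, so operators never touch `cmd.exe` or a UNIX shell to move files between heterogeneous systems.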

Various Views to Monitor Jobs

Furthermore, ActiveBatch comes with a number of different ways of viewing and monitoring ongoing enterprise jobs. To that end, the Run Book view shows operators’ job schedules in a calendar view (à la Microsoft Outlook), whereas the Daily View shows daily job executions in a detailed list. Both views can be used by operators to monitor the status of jobs that have been run, are still running, or are scheduled to run (past, present, and future). Jobs can be filtered by days or execution status (i.e., “Show me only jobs that have failed, aborted, not run, or are still executing!”).

The Gantt View can be used by job designers as well as administrators and operations staff to identify load levels (to balance loads) or quiet periods for system’s planned maintenance, etc. Finally, the Administrators View helps administrators with setting policies, defaults, etc. Part 3 will unveil the latest enhancements and options offered within Release 7.

ActiveBatch Evolution

ASCI continues with ActiveBatch’s ongoing development with a cycle of 18 months between versions. This allows the vendor to offer existing and new customers the ability to take advantage of new features and approaches in technology by applying them for improved performance and usability.

In the ActiveBatch V4 product release, the ActiveBatch Backend layer was changed from a proprietary database to the much more standard Microsoft SQL Server and Oracle databases. This enabled the vendor to take advantage of the database programming power that “stored procedure” capabilities could deliver. Also, these two database systems were the primary databases used by ASCI’s target marketplace as part of their IT infrastructure, thereby reducing learning and training costs for users.

Moreover, ActiveBatch V4 added support for additional OS platforms such as IBM AIX, HP-UX, and Linux to the previously supported Windows, Sun Solaris, HP Tru64 UNIX, and OpenVMS environments. The client interface was updated with a new graphical user interface (GUI) for drag-and-drop operations to simplify the design of workflows. Finally, the High Availability capability was added, remote management access was provided for Internet access, and remote management was made possible using BlackBerry devices.

In the ActiveBatch V5 release, performance became paramount as ASCI took advantage of the power of the supported databases. The vendor was able to test actual performance of up to 2,000 disparate server connections (i.e., execution agents), with over 1,300,000 jobs triggered in a 24-hour period. Moreover, ASCI says that there are no architectural limits in this regard.

As for managing workflows and all ActiveBatch objects, such as Jobs, Schedules, Calendars, Users, Servers, Alerts, and more, the product was now able to put these objects into one container called a Job Plan. Finally, while ActiveBatch had always fully exposed its Microsoft COM interface for programmatic access to its objects, methods, and properties, this release added Web Services programmatic access for users who were not Windows-based, for true cross-platform capabilities.

ActiveBatch V6 – The Game-changing Begins?

The ActiveBatch V6 release had many major enhancements. For one, it introduced the framework for the abovementioned Job Library, containing templates to applications and key functions used by IT departments in support of the customer’s business. The goal was to reduce code scripting, so that users could simply add key information (e.g., select options in a wizard-like style), and ActiveBatch would be able to run jobs behind the scenes.

These templates include a variety of jobs, such as the following: Structured Query Language (SQL) routines, Data Transformation Services (DTS) packages, SQL Server Integration Services (SSIS), Crystal Reports creation, etc. The library also caters to functions like Secure FTP, file archiving, ZIP file operations (compression), email jobs, etc. The use of ready-made job libraries within ActiveBatch eliminates both errors (through reusability) and the hassle of creating scripts. In layman’s terms, complex workflows can be composed by selecting options from a library of routines rather than via pesky coding of scripts.

The V6 release also featured a much improved audit system for compliance with both internal policies and governmental regulations. In addition, there is a capability for dynamic policy auditing using the Audit Variable feature. Policies can be mandatory or optional, and users can also conduct policy version comparisons.

The release offered integration with HP OpenView and Microsoft System Center Operations Manager (SCOM, formerly Microsoft Operations Manager [MOM]) to fully manage and monitor objects in the ActiveBatch system around the clock. V6 also introduced the ActiveBatch Mobile capability for smartphones and PDAs (beyond BlackBerry devices).

Furthermore, ActiveBatch Windows execution agents now fully support 64-bit systems as well as 32-bit ones as appropriate. Other general changes for improved IT service levels entailed the following:

* setting expected start times for event-based jobs;
* alerting for delayed and late running jobs;
* support for Windows PowerShell; and
* setting the maximum dispatch time.

As another major enhancement, ActiveBatch V6 introduced the concept of a Virtual Root to allow its job scheduler to be made available as a multi-tenant utility within or outside of the enterprise. Each organizational unit sees its “Job Plans” populated with its own ActiveBatch objects but, if so required, cannot see or access other users’ plans and objects. In other words, each user/tenant is isolated, secured, and invisible to the others. Each unit’s plans are published as directory references for secure access.
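The tenant isolation described above can be sketched as a toy structure (invented code, not the actual Virtual Root implementation): each tenant resolves objects only under its own root, so other tenants' plans remain invisible.

```python
class VirtualRoot:
    """Each tenant sees only objects published under its own root path."""
    def __init__(self):
        self.objects = {}  # full path -> object

    def publish(self, tenant, name, obj):
        self.objects[f"/{tenant}/{name}"] = obj

    def list_plans(self, tenant):
        # Lookups are scoped to the tenant's own prefix, so one tenant's
        # query can never surface another tenant's objects.
        prefix = f"/{tenant}/"
        return sorted(p for p in self.objects if p.startswith(prefix))

root = VirtualRoot()
root.publish("acme", "nightly-etl", {"trigger": "event"})
root.publish("globex", "payroll", {"trigger": "datetime"})

print(root.list_plans("acme"))  # only acme's plans; globex's stay invisible
```

A real multi-tenant scheduler would enforce this at the security layer rather than by path prefix alone, but the effect for the user is the same: each tenant's directory of plans looks like the whole system.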

Last but not least, event management capabilities were enhanced for non-Windows-based systems. The options include file triggers, centralized logging (log files can be delivered directly to the UI/client regardless of platform type), and silent (push) installations. The system also features integration of jobs to be executed on a mainframe system, or for a mainframe job to trigger other workflows on Windows, Linux, UNIX, or OpenVMS systems.

Excellent Value in the ETO Landscape: How Global Shop Solutions Excels for ETO Industry Challenges

The needs of the changing business, economic, and technology landscapes have given organizations that manufacture precision and complex products an added incentive to see how they can contain rising costs and stay competitive against global competition and market volatility. The challenges here are quite unlike those experienced by traditional discrete manufacturers, since discrete manufacturers do not deal with the same intricacies of mass customization as engineer-to-order (ETO) manufacturers. It was with this as a backdrop that fellow TEC analyst Leslie Satenstein and I engaged in conversation with Global Shop Solutions ERP engineer Marc Atnipp to review the company’s product, One-System, as part of a recent TEC Certification exercise.

A Word About the Vendor

Over three decades ago, from deep in the heart of Texas, Global Shop Solutions founder Dick Alexander was a proponent of such core manufacturing principles as continuous improvement and just-in-time manufacturing. In these approaches the goal is always to reduce and eliminate non-value-added activities that cause production delays and ultimately drive costs upward through excessive machine set-up costs and improperly managed inventory, among other things. The core principles that underlie the software’s development struck an eager and responsive chord among its initial customers in the small to medium-sized manufacturing space, within the “custom job-shop” and engineer-to-order segments of the market. The initial market acceptance of these principles, and of a vendor that could deliver on the promise, propelled the organization through its growth and development. Now, three decades later, Global Shop Solutions is a truly global enterprise solution, with customers not only in the US and Canada but also in South America and down under in Australia.

What We Liked About the Software

Global Shop Solutions has developed a fully integrated manufacturing and accounting ERP package for custom ETO shops spanning a range of industries, including aerospace, electronics, and precision machine and fastener manufacturing. Among the neat features were the configurable GUI icon display menu and the product’s embedded BI application, which captures KPI scorecard information at a glance. The product also offers rich, comprehensive, web-enabled graphical scheduling tools that can show current capacity or constraints across the shop floor at a glance, or drill down to a specific work center or machine. The ETO offering includes a “Product Configurator” tool, mission-critical for any company whose business model depends on the ability to integrate design and engineering changes, estimates, and quotes on short order every day. The application also offers standard CRM functionality, which means it can be a one-stop solution for firms looking for the right balance of functionality, features, and budget. The system has its own built-in report generator, permitting users to easily create reports on the fly. This can be useful for organizations that have inventory at multiple locations and need to either reconcile physical inventory counts or transfer semi-finished materials between plants and company warehouse locations.

Lawson Standing Vertically in a Flat Economy

What About Food and Beverage?

As someone smart once said, “People have to eat and drink in both good and bad times”, so the F&B sector should not be that badly affected by the downturn. Sure, the premium brand manufacturers will likely suffer, but the low-price and private label items might even flourish.

In late October, Lawson made two announcements at the InterBev 2008 Conference and Exhibition in Las Vegas, Nevada (NV), United States (US). These announcements validated the vertical strategy that Lawson professed at its CUE 2008 user conference and soon delivered on with the Lawson Tracer product. These industry-specific modules all have features that are unique to the F&B industry or that solve the industry’s specific business requirements.

Industry-specific Analytics

The Lawson M3 Analytics for Food & Beverage module helps F&B companies access meaningful business intelligence (BI) to improve decision-making without having to painstakingly develop analytics tools in-house. In fact, industry-specific BI solutions that can be up and running to provide value within days and weeks are Lawson’s attempt to mitigate the current economic crisis for its customers.

Thus the Lawson M3 Analytics for Food & Beverage application includes 70 pre-configured key performance indicators (KPIs) and 50 pre-built scorecard reports commonly used by F&B companies. Sample KPIs include day sales outstanding (DSO), inventory turnover, delivery performance, and gross margin percentage. Sample scorecards highlight critical data such as sales vs. budget, supplier performance, production variances, and customer debt.
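For readers unfamiliar with the sample KPIs, the standard formulas behind two of them reduce to simple arithmetic. The figures below are made up for illustration and have nothing to do with Lawson's pre-built content:

```python
def days_sales_outstanding(receivables, credit_sales, period_days=365):
    # DSO: average number of days taken to collect payment after a sale
    return receivables / credit_sales * period_days

def inventory_turnover(cogs, avg_inventory):
    # How many times inventory is sold and replaced over the period
    return cogs / avg_inventory

# Hypothetical figures: $1.2M receivables against $9M annual credit sales;
# $6M cost of goods sold against $1.5M average inventory
print(round(days_sales_outstanding(1_200_000, 9_000_000), 1))  # 48.7
print(inventory_turnover(6_000_000, 1_500_000))                # 4.0
```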

This selection of metrics is engineered to meet the specific needs of F&B company management. It includes what executives and middle managers need, and leaves out KPIs that would be meaningless to them (as is often the case with a more generic, “one size fits all” approach). The analytics set also includes KPIs not seen in other industries, such as yield. Until now, such a comprehensive approach to business evaluation has been essentially beyond the reach of all but the largest F&B companies.

Lawson Analytics for Food & Beverage helps F&B companies benchmark, measure, and improve performance in five key areas: sales, finance, procurement, production, and warehousing. With virtually all manufacturers currently concerned about burning cash, they need their existing systems to deliver more value faster, specifically in terms of improving cash flow and slashing costs. Lawson’s industry-specific analytics should help answer critical questions such as “what and where are our inefficiencies?”, “where are we losing cash?”, or “which processes are slow?”

The application enables tracking of multiple performance metrics by individual products, customers, and account managers to help decision-makers identify underperforming operational areas in time to take appropriate action. It also helps F&B companies eliminate unnecessary reports so decision-makers receive only the right information at the right time.

Industry-specific Planning Tools

In a related move, Lawson also announced the availability of Lawson Stock Build Optimizer and Lawson Planning Workbench for Food and Beverage. These new applications aim to help F&B manufacturers improve long- and mid-range production planning to ensure that the right amount of the right products is available at the right time to meet seasonal and promotional peaks in demand. F&B companies traditionally have to choose the lesser of two evils:

1. Losing sales if they don’t produce enough products to meet demand spikes, or
2. Writing off perishable products if they produce too much.

Lawson Stock Build Optimizer helps companies visualize their overall plan for building and maintaining an inventory of finished products. The F&B industry is relatively unique in being deal- or promotion-driven, with both customers and F&B manufacturers having a history of timing transactions around promotions. This module allows the supply chain to be leveled to eliminate timing problems: stock needs to be built up in advance of the promotion period, stock-outs eliminated, and inventory investment minimized.

Lawson Stock Build Optimizer then offers tools that allow manufacturers to perform multiple “what if” scenarios to simulate the consequences of different long-range planning decisions. These models, which can account for a wide range of variables from production capacity to ingredient costs, help planners refine master production schedules (MPS) across multiple manufacturing sites. For example, planners can use these models to evaluate the benefits of building stock in advance to support demand spikes, versus using overtime or subcontractors to meet seasonal demand for products such as holiday chocolate assortments.
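A toy version of such a "what if" comparison might weigh building stock ahead (paying holding costs) against producing at demand time on overtime. All the costs and rates below are hypothetical, and this is a drastic simplification of what a real MPS simulation would model:

```python
def stock_build_cost(units, unit_cost, weekly_holding_rate, weeks_held):
    # Produce early at normal cost, then pay inventory holding cost
    return units * unit_cost * (1 + weekly_holding_rate * weeks_held)

def overtime_cost(units, unit_cost, overtime_premium):
    # Produce at demand time, paying an overtime premium per unit
    return units * unit_cost * (1 + overtime_premium)

# Hypothetical: 10,000 units at $5, held 8 weeks at 0.5%/week holding cost,
# versus a 15% overtime premium
build_ahead = stock_build_cost(10_000, 5.0, 0.005, 8)  # ≈ 52,000
use_overtime = overtime_cost(10_000, 5.0, 0.15)        # ≈ 57,500
cheaper = min(("build ahead", build_ahead), ("overtime", use_overtime),
              key=lambda option: option[1])[0]
print(cheaper)  # build ahead
```

With these made-up numbers, building stock in advance wins; a higher holding rate or longer holding period would tip the scale the other way, which is exactly the trade-off the simulation tooling is meant to expose.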

For its part, Lawson Planning Workbench for Food and Beverage should help F&B manufacturers improve mid-range planning decisions as they balance changes in demand and supply availability during production. Companies can visualize their total coverage days for each product to guide production planning decisions for the next few weeks or months.

The application then captures and provides a full view of production variables, such as changes in customer orders, delivery schedules, employee shifts, and aging inventory. This allows planners to conduct “what if” modeling before deciding how to prioritize production for specific products and orders to help avoid stock-outs, inventory write-offs, or the need to temporarily open additional production lines.
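The "total coverage days" metric mentioned above boils down to simple arithmetic; the following is a deliberately simplified sketch with made-up figures, not Lawson's actual calculation:

```python
def coverage_days(on_hand, avg_daily_demand):
    # Days of demand the current stock can cover (simplified sketch):
    # low coverage signals a looming stock-out, very high coverage
    # signals aging inventory at risk of write-off.
    return on_hand / avg_daily_demand

# Hypothetical: 4,500 cases on hand, averaging 300 cases/day of demand
print(coverage_days(4_500, 300))  # 15.0
```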

Both Lawson Stock Build Optimizer and Lawson Planning Workbench for Food and Beverage are configurable to users’ specific needs. Both applications also offer simplified installation and support through integration with the Lawson M3 Enterprise Management System.

Dear readers, what do you think? Is this a well thought-out value proposition from a vendor to help its F&B customers during bleak times or merely a vendor’s repackaging exercise to cash in on the current economic crisis? Should virtually all vendors try to come up with similar industry-specific initiatives and thus justify their existence and customers’ investment and trust?

What are your opinions about whether these new products will help F&B manufacturers analyze an increasingly complex set of supply chain variables to help them optimize production plans, lower inventory costs, and enhance customer service? What steps are you taking in these regards?

Ramco OnDemand ERP Certification

First Impressions
The form layouts and many of the detail screens allow data fields to be relocated and reconfigured via drag and drop. The default color schemes are easy on the eyes, and screen layouts are ergonomically designed. A user can group frequently used fields together, while fields that are not used can be shrunk to insignificant size.

The product handles everything from the sales quotation to the final shipment, including all financial, inventory, and manufacturing aspects. We noted good functionality in the sales and purchasing areas.

Inventory management is thorough and supports multiple locations (such as warehouses, and even multiple aisle and bin locations for a product).

In fulfilling a production order, Ramco demonstrated that material would be drawn from the nearest warehouse and bin location according to first in, first out (FIFO); last in, first out (LIFO); and other rules. Full lot and serial number support was included.
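The FIFO/LIFO picking logic demonstrated here can be illustrated with a small sketch; the lot names and quantities are hypothetical, and a real system would also weigh bin proximity and lot expiry:

```python
def pick(lots, qty, rule="FIFO"):
    # Draw qty from dated lots by FIFO or LIFO.
    # lots: list of (lot_id, on_hand) tuples, ordered oldest-first.
    order = lots if rule == "FIFO" else list(reversed(lots))
    picked, remaining = [], qty
    for lot_id, on_hand in order:
        if remaining <= 0:
            break
        take = min(on_hand, remaining)  # never draw more than the lot holds
        picked.append((lot_id, take))
        remaining -= take
    return picked

lots = [("LOT-A", 40), ("LOT-B", 60), ("LOT-C", 50)]
print(pick(lots, 70, "FIFO"))  # [('LOT-A', 40), ('LOT-B', 30)]
print(pick(lots, 70, "LIFO"))  # [('LOT-C', 50), ('LOT-B', 20)]
```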

Choosing an ERP Product
If your organization is making the transition from a small start-up operation to a medium-sized organization, then Ramco’s SaaS product offering might be a smart choice, since it includes standard accounting functionality such as A/R, A/P, costing, and sales. Richer purchasing, HR, and CRM interfaces, as well as an advanced planning tool, are currently under development and should be available soon.

The manufacturing scheduling functionality is work area–based, where mixed types of the same machines can be pooled together to add capacity. Work center reports are available to show how much work center capacity remains. The product’s material requirements planning (MRP) and master production scheduling (MPS) interfaces are easy to use and intuitive, and set a benchmark that other vendors in this product class will have to compete against.

As a SaaS product, Ramco OnDemand ERP delivers all of its functionality globally. Accepted customizations become global on demand enhancements, and these enhancements, patches, and the like take effect immediately upon implementation.

Where Does On Demand Fit?

The Ramco on demand SaaS product appears to be an excellent entry point for emerging businesses such as light manufacturers. For light manufacturing or small shops, SaaS is a money saver, and coupled with Ramco’s virtual machine access facility, provides an economical ERP solution. For the aerospace or automobile industries, where deeply nested multi-level bills of material (BOMs) are the norm, the product is usable, though in these industries, reporting by machine within a work center requires more data capture and more drill-down reporting.

Ramco OnDemand ERP cannot be used by manufacturing organizations that require the ability to track global efficiencies by work center, since this functionality is not yet available in the product. However, for companies looking for a robust scheduling and work order planning tool, which allocates materials to specific work orders, Ramco’s SaaS ERP product offering can satisfy such requirements effectively.

Because the product is SaaS-based, the client is spared from performing backups and system maintenance, and from up-front licensing fees. At this time, costs are per seat, with additional charges for data storage and for the volume of business transactions.
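That pricing model (per seat, plus data storage and transaction volume) is easy to sketch. The fee levels below are entirely hypothetical and are not Ramco's published rates:

```python
def monthly_cost(seats, seat_fee, storage_gb, storage_fee, txns, txn_fee):
    # Illustrative per-seat-plus-usage subscription model; all fee
    # levels passed in are hypothetical, not Ramco's actual pricing.
    return seats * seat_fee + storage_gb * storage_fee + txns * txn_fee

# Hypothetical: 25 seats at $60, 10 GB at $2/GB, 5,000 transactions at $0.01
print(monthly_cost(25, 60.0, 10, 2.0, 5_000, 0.01))  # ≈ 1570.0
```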

Ramco OnDemand ERP is configured to your business requirements and typically takes less than a week to deploy. As your business grows, the solution can be scaled up to accommodate multiple locations, currencies, and business units. The application stays tuned to your business all the time.

Ramco OnDemand ERP integrates multiple functions and systems into one solution and gives you total visibility and control of operations. In the process, it helps you focus on growing your business.

As a virtual machine application, no separate processor is dedicated to running the on demand application. Rather, the virtual machine solution is termed evergreen: multiple virtual machines share a real computer on an on demand basis, allowing for lower operating costs. These savings are passed on to clients.

For more information, please visit TEC’s vendor showcase, where you can learn about the application’s support concepts, review product details, and view a demo.
