Preparing for a complex, event-driven future, leading-edge businesses are pushing the software community to focus application development on a vision of business-driven, integrated processes. The conclusion of our series focuses on key components.

InformationWeek Staff, Contributor

May 4, 2004

The goal of a service-oriented architecture (SOA) is to create a loosely coupled IT infrastructure. Ensuring that it adheres to this goal is of paramount importance. In the first two parts of this article, I set the scene for SOA and described three of its five major components: the integration framework, synthesized computing, and the abstracted service taxonomy (AST). I'll pick up the discussion here with some remarks on technical considerations and then finish up with a description of the major aspects of the final two components: business activity monitoring (BAM) and events and the full life-cycle process.

Technical Matters

What's the chief constraint in trying to realize the SOA vision? Primarily, it's that current applications are tightly coupled while the underlying SOA technology is still evolving. Therefore, it may not be possible — at least not in a cost-effective manner — to achieve loosely coupled perfection given the existing environment of tightly coupled, business-critical applications.

I believe SOA technology is worth the potentially Herculean effort required to create loosely coupled, reusable services. Simple interfaces, exposed at the right level of granularity to transmit and receive functionality, are the key to SOA. It's vital that interfaces expose the right level of detail without becoming so complex that their use and reuse is limited: Services exposed as tiny, fragmented units will be too hard to understand; on the other hand, too little detail could limit the services' usefulness.

Current technology enables connectivity, but it doesn't automatically produce good architecture. Only by following key design principles will a loosely coupled architecture succeed in increasing business functionality, reducing software costs and development time, and improving maintenance. The following are some pointers to help you create a well-architected SOA:

  • Services must be loosely coupled. They must be self-sustaining so that they can interoperate with other services without unnecessary internal dependencies.

  • Services must perform discrete tasks and provide simple interfaces to access their functionality to encourage reuse and loose coupling.

  • Message exchange between services must be coarse-grained and descriptive, using documents and schemas, which are more extensible than traditional remote procedure call (RPC) interfaces (see the sketch following this list).

  • Extensibility and versioning must ensure backward compatibility with service consumers who may be spread all over the world. Otherwise, it will be nearly impossible to update consumers with service changes.

  • When appropriate, services must support asynchronous messaging to decrease time dependence and, thus, tight coupling.

  • Services must utilize metadata to identify service semantics, capabilities, and constraints.
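
To make the granularity point concrete, here's a minimal, hypothetical Java sketch contrasting a chatty RPC-style interface with a coarse-grained, document-style one. The interface and type names are invented for illustration; real services would be defined by WSDL and XML schemas.

```java
// Fine-grained, RPC-style: many small calls, each a network round trip,
// and each a separate contract that can break consumers when it changes.
interface CustomerRpcService {
    String getCustomerName(String customerId);
    String getCustomerAddress(String customerId);
    String getCustomerCreditRating(String customerId);
}

// Coarse-grained, document-style: one descriptive document in, one out.
// The schema behind the request/response types can add optional elements
// later without breaking existing consumers.
interface CustomerDocumentService {
    CustomerProfileResponse getCustomerProfile(CustomerProfileRequest request);
}

// Placeholder document types; in practice these would map to an XML schema.
class CustomerProfileRequest  { String customerId; }
class CustomerProfileResponse { String name; String address; String creditRating; }
```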

Security, including authentication and authorization, should adhere to a federated model. This approach is in step with the loosely coupled style of an SOA.

BAM and Events

Like SOA, event-based computing relies heavily on asynchrony and a loosely coupled architecture. An event-based architecture enables the services and systems within an SOA to listen for events and then send these events to interested parties. Moving forward, organizations will need this ability to cope with the multiple trading partners, diverse and distributed employees, applications, and hardware devices that need information dispatched to and transmitted from them in near real time. Indeed, technologies such as RFID will make having an event-based architecture for sending, receiving, filtering, and collaborating on events increasingly necessary. Event-based architectures will help organizations avoid information overload by filtering out noise that would prevent them from detecting early warning signs.

An event-based architecture must contain or enable several important capabilities. A mechanism to publish and subscribe to messages should be at the heart of an event system. This technology has been available for many years within the confines of proprietary enterprise application integration (EAI) packages. Publish-and-subscribe messaging systems are used heavily on Wall Street; an SOA will help make such systems possible for a fuller range of business contexts.
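
As an illustration, here's a minimal publish-and-subscribe sketch using the standard JMS API. The JNDI names (jms/ConnectionFactory, jms/OrderEvents) and the event payload are hypothetical; any JMS provider would supply its own configured equivalents.

```java
import javax.jms.*;
import javax.naming.InitialContext;

public class OrderEventPubSub {
    public static void main(String[] args) throws Exception {
        // Look up provider-configured objects; these JNDI names are invented.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("jms/OrderEvents");

        Connection conn = factory.createConnection();
        // JMS sessions are single-threaded, so use one per role.
        Session subSession = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Session pubSession = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Subscriber: reacts to events without knowing who published them.
        MessageConsumer consumer = subSession.createConsumer(topic);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("Event received: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        conn.start();

        // Publisher: fires a descriptive, document-style event and moves on.
        MessageProducer producer = pubSession.createProducer(topic);
        producer.send(pubSession.createTextMessage(
                "<orderEvent type=\"OUT_OF_STOCK\" orderId=\"42\"/>"));
    }
}
```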

The event system must have real-time access to a large portion of the enterprise's information, including data warehouses, business rules, and preferably the entire abstracted service taxonomy (AST, described in Part II), to enable people to make intelligent decisions in real time. Business applications, networks, PBX systems, and other devices are also valid event sources or targets when their performance affects business processes and operations. Along with access, events need to be able to invoke automated processes or call for human intervention.

An event-based architecture must support long-running transactions that might aggregate, correlate, and analyze multiple events during processing. For example, imagine an order process that receives an out-of-stock event on a key order on Monday and a notification from FedEx on Friday, the day the revised order was scheduled to ship, that the carrier is running behind schedule. The system must correlate these events and dispatch the necessary notifications.
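
A minimal sketch of that correlation step might look like the following. The event-type strings and the alert-dispatch method are invented stand-ins for whatever the real event infrastructure provides.

```java
import java.util.*;

public class OrderEventCorrelator {
    // Events observed so far, grouped by the order they relate to.
    private final Map<String, Set<String>> eventsByOrder = new HashMap<>();

    // Called for every incoming event; correlates by order ID.
    public void onEvent(String orderId, String eventType) {
        Set<String> events =
                eventsByOrder.computeIfAbsent(orderId, id -> new HashSet<>());
        events.add(eventType);

        // Monday's stock problem plus Friday's carrier delay on the same
        // order is more urgent than either event alone.
        if (events.contains("OUT_OF_STOCK") && events.contains("CARRIER_DELAY")) {
            dispatchAlert(orderId, "Revised order is at risk of missing its ship date");
        }
    }

    private void dispatchAlert(String orderId, String reason) {
        // In a real system this would publish a new event or page a human.
        System.out.println("ALERT for order " + orderId + ": " + reason);
    }
}
```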

Appropriate systems must store the growing volume of context-rich data that flows around events, raising enterprise awareness and enabling improved decision-making down the road.

There are three generally recognized levels of event handling:

  • Notification, which sends information to a subscriber.

  • Closed-loop processing, which produces actionable events that are bound to requisite business rules that route or escalate events as warranted. This is especially critical when human involvement is triggered.

  • Predictive events, which learn from past occurrences. For example, a system might learn that all credit-hold orders for a particular customer should be shipped regardless of status. In some cases, the system may make such adjustments automatically; more conservatively, it may recommend that a human change the rule or halt the overrides. (A sketch of closed-loop routing follows this list.)
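
For the closed-loop level, here's a minimal, hypothetical sketch of rule-bound routing; the event types, the value threshold, and the escalation hooks are all invented for illustration.

```java
public class ClosedLoopRouter {
    // A deliberately simple "business rule": orders above this value
    // can't be resolved automatically and must involve a person.
    private static final double HUMAN_REVIEW_THRESHOLD = 10_000.00;

    public void route(String orderId, String eventType, double orderValue) {
        switch (eventType) {
            case "PAYMENT_FAILED":
                if (orderValue > HUMAN_REVIEW_THRESHOLD) {
                    escalateToHuman(orderId, "High-value payment failure");
                } else {
                    retryPayment(orderId); // automated, closed-loop action
                }
                break;
            case "SHIPMENT_DELAYED":
                notifyCustomer(orderId); // notification level: inform only
                break;
            default:
                // Unrecognized events are escalated rather than dropped.
                escalateToHuman(orderId, "Unhandled event: " + eventType);
        }
    }

    private void retryPayment(String orderId)   { System.out.println("Retrying payment for " + orderId); }
    private void notifyCustomer(String orderId) { System.out.println("Notifying customer for " + orderId); }
    private void escalateToHuman(String orderId, String reason) {
        System.out.println("Escalating " + orderId + ": " + reason);
    }
}
```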

Asynchronous programming and loosely coupled architectures are becoming increasingly common in .Net and J2EE products, as both camps compete to make theirs the SOA platform of choice. Events decouple message senders (publishers) from receivers (subscribers); therefore, event systems are inherently asynchronous and loosely coupled. That asynchrony has actually limited the use of such systems in the past, but as developers grow more comfortable with asynchronous programming, thanks to improved development tools and SOA-driven necessity, events should increasingly weave their way into applications that benefit from the flexibility they offer.

In an event-driven system, each step of an application is invoked by the event (or events) to which it subscribes. Composite applications that branch in many directions will benefit from this increased flexibility.
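
Here's a toy sketch of that idea with hypothetical event names: each incoming event triggers the steps subscribed to it, so adding a branch means registering another handler rather than rewiring callers.

```java
import java.util.*;
import java.util.function.Consumer;

public class CompositeApplication {
    // Each application step subscribes to the event that should trigger it.
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    public void subscribe(String eventType, Consumer<String> step) {
        handlers.computeIfAbsent(eventType, t -> new ArrayList<>()).add(step);
    }

    // Delivering an event invokes every step that subscribed to it.
    public void publish(String eventType, String payload) {
        handlers.getOrDefault(eventType, List.of())
                .forEach(step -> step.accept(payload));
    }

    public static void main(String[] args) {
        CompositeApplication app = new CompositeApplication();
        // Two independent branches react to the same event.
        app.subscribe("ORDER_PLACED", order -> System.out.println("Reserve stock for " + order));
        app.subscribe("ORDER_PLACED", order -> System.out.println("Run credit check for " + order));
        app.publish("ORDER_PLACED", "order-42");
    }
}
```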

BAM serves dual purposes: It enables real-time digital dashboards, and it enables event processing. BAM is becoming a standard component of business process management (BPM) suites because it can extract and aggregate data from many systems to build real-time data views and coordinate events.

As part of a BPM suite, BAM has access to the multiple systems involved in a given business process. BAM views can combine this information with that from data warehouses and forecasting systems, producing sophisticated real-time insight into the enterprise. Organizations can provide different company stakeholders with BAM dashboards customized to their needs. A sales view, for example, could enable salespeople to see orders and related details, including percentage of total. An operations view would show shipments and picking errors. The CEO could see daily and monthly totals, comparisons to plan, and more, in real time.

BAM is contextually aware of executing business processes because it's bound to the associated process models. Therefore, BAM has immediate knowledge of process deviations and can send alerts. This form of event trapping is very powerful because of its simplicity and accuracy.
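
To make this form of event trapping concrete, here's a hypothetical sketch in which a BAM component, bound to a process model's expected step durations, raises an alert on deviation. The step names and thresholds are invented.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

public class BamDeviationMonitor {
    // Expected durations come from the bound process model, so the monitor
    // is contextually aware of what "on time" means for each step.
    private final Map<String, Duration> expectedStepDuration = Map.of(
            "PICK_AND_PACK", Duration.ofHours(4),
            "SHIP",          Duration.ofHours(24));

    public void onStepCompleted(String step, Instant started, Instant finished) {
        Duration expected = expectedStepDuration.get(step);
        Duration actual = Duration.between(started, finished);
        if (expected != null && actual.compareTo(expected) > 0) {
            // Deviation from the model triggers an alert immediately.
            System.out.println("ALERT: step " + step + " took " + actual
                    + ", expected at most " + expected);
        }
    }
}
```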

As with most technical products, convergence will occur; BAM vendors will probably expand their feature sets to enable more general event infrastructure. Perhaps, one day, the AST infrastructure will be so pervasive that business process context will be seamlessly available to all interested parties.

Full Process Life Cycle

What's a "full process life cycle," and why is it valuable to software? First, let's briefly examine the current software development process using a common scenario. A business professional defines a need and documents it; IT and the business users meet about the requested change to clarify the business goals and IT capabilities; IT develops a specification detailing the work to be done; IT begins coding; subsequent change requests are received over time; IT implements the changes. Business professionals lose control of the business process as soon as they turn the specification over to IT. By the time changes occur, the business process itself, as information workers understand it, may or may not correspond with the executing software process. In addition, the current development process requires many manual work-arounds and suboptimal automated steps because traditional software doesn't fully automate end-to-end business processes.

Even after implementation, the traditional software development process shields business professionals from the processes that they understand and for which they're accountable. A major aim of a full process life cycle is to create a manufacturing process around software design, implementation, deployment, analysis, and improvement that enables disparate departments to work together cohesively on software projects. Most importantly, the life-cycle approach empowers business professionals to comprehend, analyze, and even configure their own business processes.

Currently, the development community has two prominent approaches to solving the problems created by traditional methods. The first is model-driven architecture (MDA), which builds from the bottom up and is largely based on the Unified Modeling Language (UML). MDA is the focus of a substantial effort to enable software to be built, deployed, visualized, and maintained more efficiently. Much attention in MDA is directed at improving software comprehension and documentation through graphical diagrams. Because the diagrams primarily follow UML, they're generally targeted at the technical side of the enterprise.

A key MDA goal is to enable the generation of code from UML diagrams, thus ensuring that code and diagrams stay synchronized when changes are made to one or the other. This synchronization is critical to speeding up development and finally producing documentation that's always current. MDA proponents intend to keep narrowing the gap between abstract requirements and the creation of executable code. In contrast to the BPM approach, MDA goes deeper into the technical hierarchy to assist with the creation of classes and components themselves.

BPM offers a more top-down approach. It begins with business professionals defining their needs. BPM's goal is to capture needs at the initial design stage and make them available to the rest of the development process. At a very high level, the general sequence goes like this:

  • Business professionals define needs and model business intent.

  • Software developers attach implementation components to the models received from business professionals.

  • IT deploys executable models.

The key to this approach is that the original requirements captured from business professionals (the process model) remain bound to the software through implementation and deployment. This approach empowers business professionals to understand their own business processes. Because the process model and the executable process itself are synchronized, documentation remains current. This synchronization not only solves the perpetual problem of keeping documentation and code in sync, but is also very helpful for adhering to government regulations, such as Sarbanes-Oxley, that mandate that processes be explicit and documented.
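
A toy sketch of the underlying idea, with invented names: if the process model is itself the executable artifact, with implementation components bound to its business-named steps, the model and the running process can't drift apart.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExecutableProcessModel {
    // The model: an ordered list of business-named steps bound to
    // implementation components. This same structure is the documentation.
    private final Map<String, Runnable> steps = new LinkedHashMap<>();

    public void bind(String businessStepName, Runnable implementation) {
        steps.put(businessStepName, implementation);
    }

    public void execute() {
        // Executing the model directly means the running process always
        // matches what the business professional sees.
        steps.forEach((name, impl) -> {
            System.out.println("Step: " + name);
            impl.run();
        });
    }

    public static void main(String[] args) {
        ExecutableProcessModel model = new ExecutableProcessModel();
        model.bind("Check credit",  () -> System.out.println("  calling credit service..."));
        model.bind("Reserve stock", () -> System.out.println("  calling inventory service..."));
        model.bind("Confirm order", () -> System.out.println("  sending confirmation..."));
        model.execute();
    }
}
```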

IT and business professionals may also use the process model to run simulations. IT, for example, may check how increasing the transaction load by 20 percent would affect server performance; business professionals could then check the impact on staffing if the increase occurred. Because the model captures not only the business processes but also the people and systems related to them, these simulations can take all three into account.

Life-cycle Steps

This section summarizes the five steps involved in the full process life cycle:

Model. Information workers model business processes. Key issues in modeling are: Does the package possess a modeling scheme that information workers feel comfortable working with? Does it support simulation? Is it synchronized with executable processes? Moving from business-friendly modeling languages (such as SCOR) to operational languages (such as BPEL) more suitable to executable business processes is tough, which makes synchronization between model and executing process quite challenging.

Orchestrate. Here, process models are connected to IT assets, such as Web services and databases, and become executable. Information workers can at times develop simple processes directly, but generally this step is where IT takes over development. Remember, though, that the process model serves as a common, synchronized piece of documentation. Through the process model, business users retain control and understanding of their business processes, and developers know that they're working with the "correct" specification as they insert assets directly into the process model.

Another benefit is that orchestration tools, which generally use workflow-centric languages such as BPEL, are better equipped than past technologies to handle long-running transactions, support parallel workflows, and deftly insert humans into business processes. Thus, through orchestration, end-to-end business processes are better automated.
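
As a rough illustration of a parallel workflow with a human step, here's a hypothetical sketch in plain Java. An orchestration engine would express this declaratively (in BPEL or a similar language) rather than in code, and the service names are invented.

```java
import java.util.concurrent.*;

public class OrderOrchestration {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Two branches of the process run in parallel.
        Future<String> credit = pool.submit(() -> callService("credit-check"));
        Future<String> stock  = pool.submit(() -> callService("stock-reservation"));

        // The process waits (potentially a long time) for both branches.
        String creditResult = credit.get();
        String stockResult  = stock.get();

        // A human step: the process pauses until a person acts.
        if ("REVIEW_REQUIRED".equals(creditResult)) {
            System.out.println("Routing order to a credit manager for approval...");
        } else {
            System.out.println("Order approved automatically: " + stockResult);
        }
        pool.shutdown();
    }

    private static String callService(String name) {
        System.out.println("Invoking " + name + " service...");
        return name.equals("credit-check") ? "REVIEW_REQUIRED" : "RESERVED";
    }
}
```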

Deploy. This step must support versioning and rollback. As information workers take over from IT developers, deployment will need to become simpler. The smarter, more discrete processes in an SOA will be easier to version and track, and thus to deploy.

Analyze. BAM, as discussed here and in Part II of this article, enables real-time digital dashboards consisting of data from process models and other enterprise systems. BAM can also initiate alerts based on events occurring in the process model. An example would be an order that didn't ship on time to a tier-one customer. Because BAM is contextually aware of the business process through the process model, it provides a powerful source from which to send meaningful alerts.

Improve. In the full process life cycle, this step doesn't stand apart from everything else but is instead the result of an improved software manufacturing process. The effort to improve processes and systems can leverage better analysis, process visibility for both information workers and developers, and simplified means of change. Modeling, orchestration, and visibility are the enablers of continual incremental change.

The Future Is Now

SOA and the related methods and technologies described throughout this series of articles have deep roots in a legacy of efforts to improve software development — many of which fell short. The great promise of the current efforts by major vendors (such as BEA, IBM, Microsoft, SAP, Tibco, WebMethods, and a host of other development concerns around the globe) is made real by urgent demand. Businesses, government agencies, and other kinds of organizations desperately need faster, more agile application development that's made more cost-effective through reuse and component growth.

Thanks to the Internet, technology standards are here and are continuing to improve to support SOA. What lies ahead should be a creative and productive period, beneficial to business and IT alike.

More SOA Standards

As with Part I of this article, which was accompanied by a sidebar on standards, here are some comments about important SOA standards relevant to this concluding installment. Whether used together or alone, open industry standards are an essential part of what Web services and SOA can deliver for modern applications.

Transactions have proven difficult for Web services because they require a framework that supports multistep, multiparty, multimessage operations. Early Web service standards didn't support such a framework, and the usual way of ensuring transactional integrity, the ACID properties, is also poorly suited to Web services: In a world of long-running, multiparty transactions, you can't hold the entire transaction in limbo until it completes successfully, or roll it back if it doesn't.
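
The usual alternative is compensation: Run each step to completion and, if a later step fails, undo the earlier ones with explicit compensating actions. Here's a minimal, hypothetical sketch of the pattern; the step names and actions are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CompensatingTransaction {
    // Compensations for completed steps, undone in reverse order on failure.
    private final Deque<Runnable> compensations = new ArrayDeque<>();

    public void step(String name, Runnable action, Runnable compensation) {
        System.out.println("Step: " + name);
        action.run();                     // commit this step immediately;
        compensations.push(compensation); // no locks are held across steps
    }

    public void fail() {
        System.out.println("Failure: compensating in reverse order");
        while (!compensations.isEmpty()) {
            compensations.pop().run();
        }
    }

    public static void main(String[] args) {
        CompensatingTransaction tx = new CompensatingTransaction();
        tx.step("Reserve stock", () -> System.out.println("  reserved"),
                                 () -> System.out.println("  released reservation"));
        tx.step("Charge card",   () -> System.out.println("  charged"),
                                 () -> System.out.println("  refunded"));
        tx.fail(); // e.g., the carrier booking failed days later
    }
}
```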

WS-Coordination enables a message element, called a coordination context, containing the WS-Addressing information of the coordination service, to be appended to all messages participating in the transaction. The coordinator service, a Web service described in the Web Services Description Language (WSDL), provides a mechanism to start or end a task and enables participants to check transaction status or register tasks.

WS-AtomicTransaction enables Web services to provide standard two-phase commit transactional behavior at the service layer. (Enterprises participating in the service may still have to perform compensation or versioning.)
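
For reference, here's a bare-bones sketch of the two-phase commit idea itself: a toy coordinator, not the WS-AtomicTransaction protocol. It asks every participant to prepare, then commits only if all of them voted yes.

```java
import java.util.List;

public class TwoPhaseCommit {
    interface Participant {
        boolean prepare(); // vote: can this participant commit?
        void commit();
        void rollback();
    }

    public static void run(List<Participant> participants) {
        // Phase 1: every participant must vote to commit.
        boolean allPrepared = participants.stream().allMatch(Participant::prepare);

        // Phase 2: commit everywhere, or roll back everywhere.
        // (A real coordinator would log its decision durably and roll back
        // only the participants that actually prepared.)
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
    }
}
```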

Service Composition

Business Process Execution Language (BPEL) has two primary use-case candidates: Design tools and execution engines operate on BPEL directly, or they operate on proprietary languages that import and export BPEL. The second scenario seems to be the more popular now. Upgrades to the OASIS BPEL specification will dictate the proportion of direct BPEL use versus import and export.

The individual services in BPEL-composed Web services are inserted via their WSDL interfaces. BPEL relies on WSDL for service descriptions and WS-Policy for transactional, security, and other abilities and constraints.

BPEL structure is defined by the link between the BPEL partnerLink and the WSDL port type. BPEL information or instance data is stored in variables that are described by schemas. A BPEL-composed service is a set of choreographed activities — each tied to a variable — that form the process steps and combine to define the behavior of the business process.

- Robert Eisenberg

Resources

Earlier installments online at IntelligentEnterprise.com:

Part I: "The Future Is Now," April 17, 2004

Part II: "Leveraging the Legacy," May 1, 2004
