It's all in the mind.

Is this the year that clusters of Intel Corp.-based servers start populating corporate sites? Not likely, despite the pending release of Microsoft Corp.’s Wolfpack software for Windows NT.

Beginning this spring, several server and storage manufacturers will release cluster-ready systems based on Wolfpack, which will deliver high-availability features between two server nodes.

But IT managers probably won’t sink their teeth into Wolfpack until 1998. The reasons? First, many sites are still kicking the tires on NT as a mission-critical operating system and certainly aren’t willing to bet their businesses on a first-generation product such as Wolfpack. Second, customers have other options for adding high-availability features to their network servers. And finally, many sites haven’t been able to sift through all the hype about clustering technology to decide whether it’s a worthwhile investment.

“Right now we don’t know a lot about clustering and the benefits it gives us,” said Dan Hendrickson, MIS director at Pittencrieff Communications Inc., in Abilene, Texas, which has added several NT servers over the last year as it ramps up a new Internet service provider business. “We use NT primarily because the database and tools are cost-effective. But as far as clustering [goes], I’m not sure where it fits in.”

Despite customers’ hesitation, vendors are still rushing to get their Wolfpack products out the door, with World Wide Web servers as an early target.

“Clustering is critical for a Web server,” said John Young, director of product marketing and business operations at Compaq Computer Corp.’s Server Products Division, in Houston. “[On the Internet], performance is not an issue, but absolute availability is.”

Survival of the fittest

But there are options other than Wolfpack for providing high availability, which reduces server downtime, and scalability, which adds more processors for more power. In many cases, these options–in both NT and Unix environments–are more established.

Digital Equipment Corp. had clustering on its VAX systems over a decade ago. Since then, Unix-based clusters from Digital, NCR Corp., Hewlett-Packard Co., Tandem Computers Inc. and IBM have emerged.

Unix clusters couple two or more servers to back each other up in the case of a hardware failure. Operating as a single system image, several machines can be managed as one. As a result, cluster-enabled applications can run across separate server nodes while the clustering software evenly distributes the processing power behind one application, a feature known as load balancing.
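The load-balancing idea described above can be sketched in a few lines. This is an illustrative toy, not any vendor’s actual clustering software: it simply routes each incoming request to whichever node is currently least loaded, so work spreads evenly behind the single system image the cluster presents.

```python
# Illustrative sketch of cluster load balancing; the Cluster class and
# node names are invented for this example.
class Cluster:
    def __init__(self, nodes):
        # Map node name -> number of requests currently assigned to it.
        self.load = {name: 0 for name in nodes}

    def dispatch(self, request):
        # Route the request to the least-loaded node so processing
        # power is distributed evenly across the server nodes.
        node = min(self.load, key=self.load.get)
        self.load[node] += 1
        return node

cluster = Cluster(["node_a", "node_b"])
assignments = [cluster.dispatch(f"req{i}") for i in range(4)]
print(assignments)  # ['node_a', 'node_b', 'node_a', 'node_b']
```

With two nodes and equal request costs, the dispatcher simply alternates between them; real cluster software weighs CPU, memory and application state rather than a bare request count.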

Wolfpack won’t get to that level of performance until the second generation debuts next year.

Also on the Unix side, ccNUMA (cache-coherent nonuniform memory access) architectures provide symmetrical multiprocessing scalability to hundreds of processors. Applications benefit from the same single-system-image and load-balancing characteristics of a cluster, but without modification to the software.

Sequent Computers Inc. and Data General Corp. have ccNUMA designs for both Unix and NT. However, the processor scalability and shared-memory design go against the grain of NT, a shared-nothing operating environment scalable only to eight processors. As a result, NUMA is making major inroads in Unix environments, leaving the door open for Wolfpack on the NT side.

But Wolfpack also faces a handful of NT failover solutions from Octopus Technologies Inc., Vinca Corp., Network Integrity Inc., Veritas Software Corp., Sequoia Systems Inc. and others.

Wolfpack is similar to these products in that they all provide a way to eliminate downtime in the event of a single server failure. But whereas each depends on different techniques that may require specific hardware and software configurations, Wolfpack promises hardware and application independence.

Therefore, these third-party companies will have to evolve their products as Wolfpack steps into their space with standards-based technology.

Vinca, for example, plans to add back-end mirroring to the storage portion of Wolfpack clusters. This feature will provide a speed advantage and eliminate the possibility of data loss when a server is down.

Octopus plans to ship a fault-tolerant clustering option for Wolfpack by the end of the year. With this feature, an off-site cluster could act as the backup for an on-site cluster in the event that an entire network goes down.

“The reason they are helping with [Wolfpack] is that they don’t want the same vendor lock-in that the Unix customer [faces]. And they want a wide range of application availability,” said Mark Wood, Microsoft’s Wolfpack product manager.

That’s the same reason IBM is in the midst of establishing a common set of APIs for Unix clustering based on technology dubbed Phoenix. Phoenix will be available on IBM’s Intel-based PC Servers in the next few months, and “the intent is to work with Microsoft on Wolfpack,” said Bill O’Leary, an IBM spokesman in Somers, N.Y.

Wolfpack compatibility will establish a crossover between NT and Unix for heterogeneous clusters, but IBM is still hammering out the issues of how applications can talk to two sets of APIs.

IBM and developers of fault-tolerant software admit it makes better sense to comply, rather than compete, with Wolfpack because Microsoft is lining up a long list of application and hardware support.

The Redmond, Wash., company has provided its Wolfpack software development kit to more than 50 application vendors. Moreover, Wolfpack is being developed on top of standards such as SCSI-2 for attaching multiple servers to a SCSI-based storage subsystem.

Vendors can opt to connect servers and subsystems via a high-speed interconnect, such as Tandem’s ServerNet. Such a connection will enable load balancing in phase one of Wolfpack, as long as the application is developed specifically for that task. Microsoft will deliver application scalability across a cluster in its SQL Server database next year, and Tandem’s ServerWare SQL will scale in April, said officials from both companies.

Application control is one area that piques IT managers’ interest, since that’s one function many fault-tolerant packages have not delivered.

Current fault-tolerant packages “protect you against hardware issues. But if clustering offers a more reliable software platform, then it will be interesting,” said David Blanchard, decision support analyst at Quaker Oats Co., in Chicago.

Leaders of the pack

The first companies to deliver a Wolfpack cluster based on servers, interconnect and storage are six system manufacturers that are working with Microsoft on the technology: Compaq, Digital, HP, IBM, NCR and Tandem. Microsoft’s seventh partner, Intel Corp., is tweaking its Pentium Pro SHV board for use in a cluster.

These OEMs promise that customers will have an upgrade path to the clustering technology that doesn’t require the purchase of a turnkey solution.

“If someone has already purchased a server configuration that we certify [for Wolfpack], then all they need to do is buy a second server and the equipment to make a cluster environment,” said Christophe Jacquet, clustering product manager at HP’s NetServer Division, in Santa Clara, Calif.

Upgrades may take the form of a clustering kit, much like Digital provided with its own NT clustering software, released last May, that had a price tag of about $1,000 per server.

With so many different clustering efforts, some customers–while sold on the technology–aren’t willing to commit yet to one solution.

“We will have full migration to NT clusters in the next 36 months, depending upon what the vendors provide. But I want to play it as safe as I possibly can,” said David Forfia, manager of information technology services for the Electric Utility Department of the City of Austin, Texas, which uses VAX VMS clusters and mirrors Microsoft’s Exchange Server on NT with Octopus technology. “Nothing I can do with clustering now on NT is going to make or break my company. I’m keeping all options open.”

Wolfpack: A Two-Phase Project

First focus is on high availability; load balancing, other performance improvements expected in second phase

In the first phase of Microsoft Corp.’s Wolfpack, the high-availability benefits of a two-node cluster are the primary focus. The second phase, still more than a year off, will go beyond the two-node barrier and start to address some of the performance potential of clustering with Windows NT.

New load-balancing features will dominate the second release, tying Wolfpack’s failover process to server utilization thresholds that can be set by the network administrator.
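A threshold-driven migration policy of the kind described above might look like the following sketch. The function name and data layout are hypothetical, not Wolfpack’s actual API: given each node’s utilization and an administrator-set threshold, it pairs overloaded nodes with the least-utilized node to receive their work.

```python
# Hypothetical sketch of utilization-threshold-driven migration;
# pick_node and the 0.80 default are invented for illustration.
def pick_node(utilization, threshold=0.80):
    """Given {node: utilization}, return (overloaded, receiver) pairs:
    nodes above the admin-set threshold matched with the least-loaded
    node that should take over some of their resource groups."""
    receiver = min(utilization, key=utilization.get)
    return [(node, receiver) for node, util in utilization.items()
            if util > threshold and node != receiver]

moves = pick_node({"node_a": 0.92, "node_b": 0.35})
print(moves)  # [('node_a', 'node_b')]
```

The key idea is that the same failover machinery used for crashes can be triggered by a policy check, so an administrator tunes a threshold rather than moving workloads by hand.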

Clustering servers together requires dedicated, high-speed links between servers. These interconnections handle heartbeat signaling, or continuous verification of each server’s operational status. They also are used for data transfers within clusters.

In the two-node server cluster arrangement, each server acts as the failover partner of the other. If one server crashes, the second server automatically stands in for the lost resource until it can be restored to service.

This creates a high-availability environment that is viewed as a single computing resource. The single-system view of the server cluster makes hardware failures transparent from the client perspective.
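The heartbeat-and-failover mechanism described above can be sketched as follows. The class names, timeout value and resource-group model are invented for illustration, not taken from Wolfpack: each node timestamps its partner’s heartbeats, and when the partner falls silent past the timeout, the survivor takes over its resource groups.

```python
# Minimal, hypothetical sketch of heartbeat-based failover between two
# cluster nodes; names and timings are illustrative only.
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before declaring failure


class Node:
    def __init__(self, name):
        self.name = name
        self.resources = {name}  # resource groups this node owns
        self.last_heartbeat = time.monotonic()

    def receive_heartbeat(self):
        # Called each time the partner signals over the dedicated interconnect.
        self.last_heartbeat = time.monotonic()

    def partner_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) < HEARTBEAT_TIMEOUT


def check_failover(survivor, failed_partner, now=None):
    # If the partner's heartbeat has timed out, the survivor stands in
    # for it by taking over its resource groups.
    if not survivor.partner_alive(now):
        survivor.resources |= failed_partner.resources
        failed_partner.resources.clear()


a, b = Node("node_a"), Node("node_b")
# Simulate node_b going silent for longer than the timeout:
check_failover(a, b, now=a.last_heartbeat + HEARTBEAT_TIMEOUT + 1)
print(sorted(a.resources))  # ['node_a', 'node_b']
```

Because clients address the cluster rather than either node, the surviving node’s takeover of the failed node’s resources is what makes the hardware failure invisible from the client’s perspective.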

One administrative benefit of Wolfpack is the rolling upgrade, which allows software and hardware updates to be performed on one system at a time, without a loss of service. In fact, the terms “primary” and “secondary” servers don’t apply in server clusters because all server resources are equal.

Managing the failover process requires close ties between servers and applications. The Wolfpack APIs are making clustering, traditionally a proprietary technology, into a more broadly supported approach. Using these APIs, developers have started coding cluster-aware applications that will operate on Wolfpack-compliant hardware platforms, such as disk-drive controllers and storage arrays, to be certified by Microsoft.

The APIs were issued in November, so by the time the second phase rolls out, even more cluster-aware applications should be available.
