Core Computing and Networks

Grid Computing and Computing On Demand

Grid computing applies distributed computing resources to make more efficient use of processing, storage, applications, and data, and to enable computing on demand. It represents a trend toward treating information technology as a utility, with resources available on demand and businesses paying for what they consume. Grid computing has the potential to reshape the entire application service provider and application infrastructure provider market, changing our typical assumptions about how services are delivered and which services can be made available. Grid computing is similar to peer-to-peer technologies in that both harness distributed computing resources working together to perform complex calculations, and both benefit from economies of scale. The difference is that while peer-to-peer technologies typically involve desktop computers running at the so-called "edge" of the network, a computing grid is more typically a collection of servers. As such, it can provide greater levels of security and manageability and is closer to the traditional outsourcing model.

Grid computing started in the academic and scientific domains via initiatives such as the Global Grid Forum, but it is now becoming available, and of interest and benefit, to mainstream business as well. Companies such as Compaq, IBM, Microsoft, and Sun all have initiatives aimed at the enterprise, offering a return on information technology investment in the form of computing utility services and software. The advantage for the business user is being able to tap into additional computing resources on demand, based upon fluctuations in usage. The business can also minimize additional investments in computing resources when step-changes in demand occur: when a usage and capacity threshold is reached, the business can tap into the "grid" instead of purchasing additional hardware. Businesses can also make better use of their own internal computing resources by setting up their own grids in order to tap unused processing cycles and storage.

One of the barriers to adoption of grid computing is that it involves the sharing of computing resources, which raises a number of security, manageability, and trust issues. Businesses may be willing to apply the technology within their own global network infrastructure or from the network of a trusted service provider, but obviously not from an untrusted, undefined network of servers scattered across the Internet. They will also be unlikely to share their own computing resources with others unless those others are trusted business partners or customers. Some of the advantages and disadvantages of grid computing and computing on demand will be discussed later in this section, but first it is worth reviewing the current landscape in terms of academic and scientific initiatives and the industrial offerings available.

Academic and scientific initiatives in the grid computing arena include the Global Grid Forum and the Globus Project. The Global Grid Forum (GGF) is a community of working groups that are developing standards and best practices for distributed computing. The GGF was formed in November 2000 by merging the efforts of the North American "Grid Forum," the European Grid Forum "eGRID," and the Asia-Pacific grid community. Members of the GGF include over 200 organizations from over 30 countries.

The Globus Project is a research and development project, started in 1996, centered at Argonne National Laboratory, the University of Southern California, and the University of Chicago. Research is supported by DARPA, the U.S. Department of Energy, NASA, the National Science Foundation, IBM, and Microsoft. The project is focused on enabling the application of grid computing concepts to scientific and engineering computing. The Globus Toolkit is software that enables organizations to set up their own grid computing infrastructure and to allow others to access their resources. Sites within the grid environment are able to maintain control over who has access to their computing resources via site administration tools. The toolkit includes components to help manage resource allocation, set up security infrastructure, access secondary storage, and monitor the heartbeat of application processes. The toolkit has been widely adopted by a number of technology providers such as Compaq, IBM, Microsoft, and Sun as an open standard for grid computing, and it is now evolving toward a standard called the Open Grid Services Architecture (OGSA), which combines grid computing concepts with Web services concepts for interoperability.
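To make the toolkit's division of labor more concrete, here is a minimal sketch, in Python, of how a grid job might flow through site-level authorization and resource allocation. The class and method names are hypothetical illustrations of the concepts described above, not the actual Globus Toolkit interfaces.

import time

class GridSite:
    """A hypothetical grid site offering spare computing resources."""
    def __init__(self, name, free_cpus):
        self.name = name
        self.free_cpus = free_cpus
        self.authorized_users = set()

    def grant_access(self, user):
        # Site administrators retain control over who may use their resources.
        self.authorized_users.add(user)

    def allocate(self, user, cpus_needed):
        # Resource allocation: check authorization, then available capacity.
        if user not in self.authorized_users:
            raise PermissionError(user + " is not authorized at " + self.name)
        if cpus_needed > self.free_cpus:
            return None
        self.free_cpus -= cpus_needed
        return {"site": self.name, "cpus": cpus_needed, "started": time.time()}

def submit_job(sites, user, cpus_needed):
    # Try each participating site until one can satisfy the request.
    for site in sites:
        try:
            lease = site.allocate(user, cpus_needed)
        except PermissionError:
            continue
        if lease is not None:
            return lease
    return None  # grid saturated: caller may queue the job or add capacity

# Example: two sites pool spare CPUs; only authorized users are served.
sites = [GridSite("site-a", 64), GridSite("site-b", 128)]
for site in sites:
    site.grant_access("alice")
print(submit_job(sites, "alice", 96))  # lands on site-b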

On the industry side, Compaq's Computing On Demand strategy offers enterprise customers increased flexibility and control over the design, management, and cost of their information technology infrastructure. The program was introduced in July 2001 and allows users to scale computing power based upon their needs. The program offers both capacity on demand and access on demand. The capacity on demand offering includes pay-per-use storage management and capacity, and pay-per-use measured processor (CPU) consumption. The access on demand offering includes per seat/per month thin client access, PC access, and mobile computing access. It allows businesses to scale the amount of user access they have to centralized server-based applications such as call center applications. Current customers of the Compaq Computing On Demand program include American Express, Bank of America, Franklin Templeton Investments, and Ericsson.

IBM's offering is called e-business on demand. The aim is to make e-business as convenient as accessing traditional utilities such as water, gas, telephone, and electricity. The e-business on demand offering is divided into infrastructure on demand and business process on demand. Infrastructure on demand includes core infrastructure services such as content distribution, Internet data and document exchange, managed hosting and storage, together with management services. Business process on demand includes horizontal business services such as business intelligence, e-commerce and e-procurement, together with industry-specific services such as communications, distribution, industrial, and finance. As part of this initiative, IBM has invested over $4 billion to build more data centers worldwide.

Sun's offering in the grid computing space comprises a combination of hardware and software products. The main software product is the Grid Engine software, which provides a distributed resource management application for grid computing. It includes specialized software agents on each machine that help to identify and deliver the available computing resources to the grid environment. Customers using the Grid Engine software include companies in electronic design automation, mechanical computer-aided engineering, biotechnology, and scientific research. Sun has also released the Grid Engine source code to the broader open-source community via its Grid Engine project. In addition, Sun has a variety of software products for managing grid infrastructure, grid security, and grid systems management. Practicing the technology itself, Sun uses a 4,000-processor campus grid, running at 98 percent CPU utilization, to execute over 50,000 electronic design automation jobs a day. For most businesses, typical CPU utilizations for workstations and servers range from five to twenty percent, so this type of usage level helps to maximize existing investments in servers and to avoid investments in supercomputers.
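The matchmaking role of those agents can be illustrated with a small sketch, again in Python with invented names rather than the real Grid Engine interfaces: each machine reports its spare capacity, and a simple dispatcher places the next job on the least utilized host.

class HostAgent:
    """A hypothetical per-machine agent reporting available capacity."""
    def __init__(self, hostname, cpus, load):
        self.hostname = hostname
        self.cpus = cpus
        self.load = load  # fraction of CPU currently in use, 0.0 to 1.0

    def free_capacity(self):
        # Idle cycles that the grid can harvest for queued jobs.
        return self.cpus * (1.0 - self.load)

def dispatch(agents, job_name):
    # Place the job on the host with the most idle capacity.
    best = max(agents, key=lambda a: a.free_capacity())
    best.load = min(1.0, best.load + 1.0 / best.cpus)  # job takes one CPU
    return job_name + " -> " + best.hostname

# Example: typical workstations sit at 5 to 20 percent utilization, so a
# campus of mostly idle machines can absorb a large stream of batch jobs.
agents = [HostAgent("eda-01", cpus=4, load=0.10),
          HostAgent("eda-02", cpus=4, load=0.75),
          HostAgent("eda-03", cpus=8, load=0.05)]
for job in ("synthesis-1", "synthesis-2", "place-route-1"):
    print(dispatch(agents, job))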

One of the business scenarios to which grid computing can be applied is within a shared services IT environment. Many large corporations provide shared IT services to their operating companies. These operating companies can often number into the tens or even hundreds within a single corporation. A shared IT services environment helps to reduce costs by leveraging prepurchased enterprise software licenses, computing infrastructure, and staffing. In addition, best practices and templates for software development, deployment, and ongoing operations can be developed and shared across all operating companies. These shared services models can establish application service provider and application infrastructure provider services and offer grid computing as an additional value-added service. Computing resources such as processing time and disk space may not be completely utilized in the shared services environment via traditional hosted applications. Grid computing provides a way for the additional capacity to be tapped into and consumed upon demand, thus maximizing the use of existing IT assets and infrastructure such as servers and storage.

One of the challenges for such a shared services environment offering computing resources is to develop the usage and pricing model that will recover costs and charge the various operating companies appropriate amounts based upon their level of consumption.

In fact, this is a challenge that faces the entire computing utility movement, not just IT shared services environments. As one starts to offer per-transaction and per-usage pricing models, there is an increasing need to better track and report on such usage. Most software licensing products were developed to support the concept of one-time shrink-wrapped software product sales rather than the emerging software as a service model. In addition, usage levels will vary by customer or by operating company, so it becomes important for usage to be accurately measured and reported for fair pricing among customers.

The measurement attributes that serve as cost drivers will also vary by application category. For example, packaged applications typically have the number of end users as a primary cost driver. This is generally easy to report on, since the number of end users can be tracked by the number of software licenses granted or the number of registered users in a database. Customers also like to see consistent and predictable pricing, and per-user pricing provides this and eliminates surprises due to variations in usage. The cost drivers for lower-level infrastructure applications are typically harder-to-measure attributes such as the number of servers, number of processors, and number of transactions. These drivers may be independent of the actual number of end users, and the applications may consume a variety of resources such as processing power, disk space, and bandwidth. Even once such usage is measured, there is still the challenge of presenting customers with monthly bills that may vary greatly. While these bills can be considered "fair," they are often undesirable because the charges are difficult to predict for budgeting purposes.
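A minimal metering sketch makes the trade-off concrete. The Python below (with invented rate figures and record formats, purely for illustration) bills the same customer two ways: a flat per-user charge and a metered per-usage charge.

def per_user_charge(num_users, rate_per_user=50.0):
    # Flat and predictable: the customer can budget for it months ahead.
    return num_users * rate_per_user

def per_usage_charge(records, cpu_rate=0.10, gb_rate=0.02):
    # "Fair" metered billing: charge for the CPU-hours and gigabyte-days
    # actually consumed, which can swing widely from month to month.
    return sum(r["cpu_hours"] * cpu_rate + r["gb_days"] * gb_rate
               for r in records)

# Example: the same operating company, two consecutive months.
march = [{"cpu_hours": 1200, "gb_days": 5000}]
april = [{"cpu_hours": 4800, "gb_days": 5200}]  # a quarter-end batch spike
print(per_user_charge(10))      # 500.0 in both months
print(per_usage_charge(march))  # 220.0
print(per_usage_charge(april))  # 584.0: harder to budget for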

Another challenge for service providers is putting in place sophisticated provisioning and billing applications that can automate the provisioning of services, track usage, and charge for the services via a complex set of usage measurement attributes. As software becomes more like a traditional utility, software service providers will need to build the same types of operational support systems (OSS) and business support systems (BSS) that wireless carriers currently use when delivering wireless service to their subscribers. Wireless carriers have streamlined and automated their services in order to bring on new customers at minimal cost. Software service providers offering grid computing and IT utility capabilities will need to develop similar turn-key approaches to bringing customers into their environments in order to make their ventures profitable.

The general business trend occurring with grid computing appears to be the increasing distribution of processing for economies of scale and the increasing granularity of services offered. Instead of tapping into application services only, businesses can now tap into IT utility services such as additional processing power, disk space, and bandwidth. The fundamental challenges for this model include trust and security as the computing grid expands and migrates toward a global network, often crossing organizational boundaries. Businesses can benefit from on-demand service if their grid computing or IT utility is outsourced, but they will have the same management, security, and trust issues as with any other type of external provider. To be successful, service providers must offer customers management tools for self-administration of common tasks; strong service level agreements covering availability, notification, and response times for planned and unplanned outages; and high levels of security within their infrastructure. This is true whether the service is a third-party offering or an internal IT shared services offering.

Power Line Networking

With ever more devices and appliances such as consumer electronics becoming data enabled, one of the challenges for manufacturers, retailers, service providers, and consumers is how to provide these devices with connectivity within the home. In fact, according to the analyst group Cahners In-Stat, the home connectivity market is expected to reach $6 billion by 2004. Connectivity can enable device-to-device communications, for example, for use in gaming applications, in addition to communications with the Internet for remote monitoring and data exchange.

In the consumer home networking arena, many techniques currently exist for users to network their computers and appliances. In addition to traditional wired, or Ethernet, connections that operate at ten megabits per second, consumers now have options such as wireless networking via a variety of wireless standards such as 802.11b (also sometimes known as Wi-Fi, which is the certification standard for 802.11b compatibility) or Bluetooth, and options such as connecting via standard telephone lines. The HomePNA standard, from the Home Phoneline Networking Alliance, allows devices to be networked across home telephone sockets. The alliance is a nonprofit association of industry companies such as 3Com, Agere Systems, AMD, AT&T Wireless Services, Broadcom, Compaq, Conexant, Hewlett-Packard, Intel, Motorola, and 2Wire. It was founded in June 1998, has over 150 member companies, and has released two specifications for home networking at one megabit per second and ten megabits per second via the standard RJ-11 phone jack. Its third-generation specification will target multimedia applications at 100 megabits per second. Products available for this network include preconfigured PCs, network interface cards, routers, modems, and Internet appliances.

A new alternative to Ethernet, wireless, and phone line networks, called power line networking, promises to offer a new way to connect appliances in the home using existing copper-wire infrastructure: the electrical wiring already installed in the home, which provides a network connection via any electrical outlet. One of the benefits of this approach is the ubiquity of connections already in place, often two or more electrical outlets per room compared to just a handful of phone jacks in an entire home. It is also more secure than wireless networking via 802.11b or Bluetooth, which spread the data signal in an uncontrolled dispersion pattern, exposing it to anyone within the signal radius who wants to listen in on the communications. For 802.11b, this signal can travel up to 500 feet indoors or 1,000 feet outdoors; Bluetooth is a shorter-range wireless networking specification that reaches about 33 feet (10 meters). It should be noted that many vendors, such as Credant Technologies, are attacking the wireless security market with products that help to ensure the confidentiality of data via access control and encryption.

One of the challenges of delivering data over power lines is that these networks are very noisy and prone to variable interference, since they were not designed to carry digital data transmissions. When electrical devices are turned on and off, they can cause spikes and other noise patterns that distort the electrical signal and the data signal riding along with it.

The HomePlug Powerline Alliance is a consortium of companies that aims to develop new standards for power line networking and has overcome many of these original technical difficulties. The Alliance is a nonprofit industry association composed of over 90 companies and was formed in April 2000. Some of the original founding companies included 3Com, AMD, Cisco Systems, Conexant, Enikia, Intel, Intellon, Motorola, Panasonic, S3's Diamond Multimedia, RadioShack, and Texas Instruments. Many of these companies are also playing a role in the Home Phoneline Networking Alliance and are obviously hedging their bets as to which technology will generate the most traction in the consumer marketplace. Most of them will benefit from the adoption of either technology, since they stand to sell more network interface cards and other networking equipment. The HomePlug 1.0 specification provides a data rate of 14 megabits per second and supports products for gaming, consumer electronics, voice telephony, and personal computing. The group has already conducted field trials within the United States and has confirmed that the specification is ready for market rollout. The next steps will be the introduction of HomePlug-compliant products from its member companies and continued work to ensure product compatibility.

For businesses in the consumer electronics arena, the advent of power line networking and other forms of home networking opens up new possibilities for providing service to consumers. If consumers have a cable modem or DSL router, they can connect a power line router and instantly have full Internet access across the home. Intelligent devices and appliances can then be accessed from remote locations: consumers can manage home security and lighting settings, while retailers can offer value-added data services for their smart appliances, troubleshoot appliances remotely, and potentially eliminate on-site visits. Some potential future scenarios include refrigerators that can reorder supplies, music systems that can purchase and play music downloaded from legal music sites on the Internet, gaming systems that can purchase and install new games, and a variety of home systems such as air conditioning, electrical, and security settings that can be remotely administered. Of course, one of the challenges is that with more devices exposed on the Internet, unauthorized usage can have far more severe effects than the relatively benign e-mail viruses we see today. One can imagine alarms being deactivated, air-conditioning systems being turned off, or ovens being set on high. Any home networking solution that enables home appliances to be remotely managed via the Internet needs stringent controls in terms of user authentication and access control. As the home, in addition to the office, becomes increasingly connected, robust security will be a key requirement in order to spur user adoption. Without adequate security, these innovations may be relegated to niche markets and miss out on widespread consumer acceptance.
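As a minimal sketch of what such controls might look like, the Python below combines message authentication (so a forged command is rejected) with a per-user, per-appliance access control list. The key, appliance names, and command format are all invented for illustration; they are not drawn from any particular product.

import hashlib
import hmac

SHARED_KEY = b"per-home secret provisioned at install time"  # hypothetical

# Access control: which remote actions each user may issue per appliance.
ACL = {
    "alice": {"lights": {"on", "off"}, "thermostat": {"set_temp"}},
    "guest": {"lights": {"on", "off"}},
}

def sign(command):
    # Authenticate the command so tampering in transit is detectable.
    return hmac.new(SHARED_KEY, command.encode(), hashlib.sha256).hexdigest()

def handle(user, appliance, action, command, signature):
    # Reject anything that fails authentication or falls outside the ACL.
    if not hmac.compare_digest(sign(command), signature):
        return "rejected: bad signature"
    if action not in ACL.get(user, {}).get(appliance, set()):
        return "rejected: " + user + " may not " + action + " the " + appliance
    return "ok: " + appliance + " " + action

# Example: an authorized command succeeds; an out-of-policy one fails.
cmd = "lights:off"
print(handle("alice", "lights", "off", cmd, sign(cmd)))
cmd = "thermostat:set_temp"
print(handle("guest", "thermostat", "set_temp", cmd, sign(cmd)))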

 


