Internet Future Strategies
Author: Daniel Amor
Publisher: Prentice Hall, New York, 2001
ISBN: 013041803X
Pages: 318

Chapter 1 - The Next Chapter on the Internet

If you can imagine your doctor talking to your hearing aid or your refrigerator reordering milk, then you’re ready to embrace the next Internet revolution called pervasive computing. The concept of pervasive computing, which describes the extension of the Internet beyond PCs and servers to form a truly universal network, has been around for several years, but only recently has it begun to find its way to the mainstream.

Although some pundits claim this development signals the death of the personal computer, that demise is unlikely, considering we haven’t even moved past mainframes yet. What will start happening, however, is the expansion of the power of the Internet beyond traditional computing devices, enabling people to participate in a global network by using their mobile phones, TV sets, or refrigerators. And it will go a step further.

Pervasive computing will not only replicate the standard functionality of the Web in embedded devices, but it will also offer the services provided by such devices to other entities on the Internet. The idea is to reap the benefits of ever-broader networks without having to deal with obtuse, unwieldy technology. The first generation of embedded devices were passive, meaning that they relayed existing services to other devices, such as the TV. The second generation of embedded devices is more intelligent and can look for services on the Internet, collect them, and bundle them into “metaservices.”

1.1 Introduction

Pervasive computing describes an environment where a wide variety of devices carry out information processing tasks on behalf of users by utilizing connectivity to a wide variety of networks. In a 1996 speech, Rick Belluzo, executive VP and general manager of Hewlett-Packard, compared pervasive computing to electricity, calling it “the stage when we take computing for granted. We only notice its absence, rather than its presence.” While this may be true for Bill Gates’s $53 million home, that level of pervasive technology hasn’t trickled down to the mainstream—yet. Louis V. Gerstner, Jr., of IBM once said, “Picture a day when a billion people will interact with a million e-businesses via a trillion interconnected, intelligent devices.” Pervasive computing does not just mean “computers everywhere”; it means “computers, networks, applications, and services everywhere.”

Pervasive computing has roots in many aspects of computing. In its current form, it was first articulated by Mark Weiser in 1988 (even before the introduction of the World Wide Web) at the Computer Science Lab at Xerox PARC. In his opinion, pervasive computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, pervasive computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; pervasive computing is a difficult integration of human factors, computer science, engineering, and social sciences.

Weiser also calls this invisible, everywhere computing: computing that does not live on a personal device of any sort but is in the woodwork everywhere. Its highest ideal is to make a computer so embedded, so fitting, so natural that we use it without even thinking about it. By invisible, Weiser means that the tool does not intrude on your consciousness; you focus on the task, not the tool. Eyeglasses are a good tool: you look at the world, not the eyeglasses. Pervasive computing creates an augmented reality. It enriches objects in the real world and makes them “smart,” allowing these devices to better assist people. With additional information about the environment and the context, these devices become better tools for the people using them.

The trick is to build devices to match people’s activities that are related sets of tasks. Already you use 50 or more computers in your home. You don’t care how many there are, as long as they provide value and don’t get in your way. That’s how it should be with the computers in our lives. Computers and motors are infrastructure; they should be invisible.

When computers merge with physical things, they disappear. This is also known as invisible computing and raises, of course, new issues with the user interface, since people do not know that they are using a computer. New, intuitive user interfaces are therefore required.

Realizing such a mass market revolution will involve new types of strategic planning that will connect individual organizations from different industries into an intricate network of alliances and interest groups. Additionally, this vision requires a simplified consumer-marketing strategy that focuses on customer solutions instead of technology products. But this will not be easy; very little of our current system infrastructure will survive.

Examples of Invisible Appliances

Here are some examples of information appliances. The list includes many things you wouldn’t think of as computers, which is just the point: successful invisible computers won’t be thought of as computers. So, you won’t notice you are using lots of them.

  • ATMs – Money delivery through a computer network
  • Cash Registers – Calculators used at checkout counters
  • Navigation Systems – Direction-giving devices and maps built into cars
  • Digital Cameras – Images just like those from standard cameras
  • Electric Instruments – Electronic simulations of guitars, keyboards, drums
  • Calculators – Dedicated devices that many people still prefer, even when sitting in front of a computer

1.1.1 Human-Centered Development

Personal computers are general-purpose devices, designed to do everything. As a result, they can’t be optimized for any individual task. Another related problem is that in the design of the PC, many choices were made intentionally to make the PC as flexible and user friendly as possible. Users have complete control over their machines and can even modify the operating system at will, just by clicking on an e-mail attachment. This model makes any real security impossible. Furthermore, it makes it hard even for experienced computer experts to fix problems. Thus, long-range ease of use has been given up in favor of short-term convenience by enabling users to modify their machines on the spur of the moment. This approach is great for rapid diffusion of the next software application, but it leads to frustration when things go wrong, as they often do. Providing stability, security, or transparency requires limiting users’ flexibility.

A tradeoff between flexibility and ease of use is unavoidable. However, there is no single tradeoff that is optimal for everyone. Donald Norman argues that the PC was aimed at the “early adopters” and that its failure to penetrate about half of the households in the United States is a sign of its poor design. The success of Apple’s iMac is another sign that consumers do value simplicity. The iMac requires only a power cable and a telephone cable, and within seconds it is on the Internet. No fuss about configuration: it just works. Norman argues that information appliances can and should be designed for the mass market. Proper design of simple interfaces, appropriate when a restricted set of tasks is to be enabled, makes this possible. This is one reason why more mobile phones than personal computers are sold in Europe: mobile phones are easy to use.

Look at open source software. Linux is the major rival to Microsoft Windows today. Yet it seems that the main lesson to be drawn from the success of Linux and Apache is different. These systems are built by experts for experts. There are many people (although a tiny fraction of the whole population) who know what regular expressions are and can use text commands to execute programs much faster than a graphical user interface (GUI) would let them. They also tend to be in charge of important resources such as web servers, and they appreciate (and effectively use) the flexibility that access to source code provides. Apache and Linux are ideal for them. They are not satisfied with the black-box software from commercial vendors.

These expert users do not account for a large fraction of desktop computers but do control a large share of computing budgets. They form a substantial market for computers where flexibility is dominant, even at the cost of ease of use. On the other hand, it is doubtful whether those among them who contribute to the code, as opposed to just using it, are interested in creating the easy-to-use but much less flexible interface that would appeal to a wider market. That is the province of Microsoft and Apple. Apple has always been strong on the user interface side, but now it is creating an even more powerful alternative to Windows, called Mac OS X. It uses a BSD kernel, similar to the core of Linux, and has an easy-to-use interface called Aqua. It is still a GUI, but one built on more than 15 years of experience in this area.

To make computers invisible, designers must adopt a more human-centered product development process, one that studies the users for whom the device is intended in the field where they normally work, study, and play. Norman calls this method “rapid ethnography.” Once this study has been conducted, rapid prototyping (design, mock-ups, and tests that take hours or days) is used to find out how people respond to the product idea. This process is repeated until an acceptable result is reached.

The next step in human-centered development is the manual, which needs to be written as briefly and simply as possible. The manual and the prototypes are then used as the design specs for the engineers.

The issue today is that a product usability test is done only after the product has been manufactured. It should be the other way around. The industry knows that you cannot get quality through testing. Quality must be built in at every step of the process. The result is faster production at higher quality. The same story holds for the total user experience. The total user experience is far more than usability. It is the entire relationship between the consumer and the product.

Human-centered product development is simple in concept but foreign to the minds of most technology companies. The youth of a technology is very exciting: engineers are in charge, customers demand more technology, and there are high profits and high rates of growth. Ever more technology is introduced, until the market collapses. After this, the technologies mature and become a commodity. They are taken for granted. Customers want value, quality, and fun. Looking at the Internet, we have just left the collapse that happened in late 2000 and are now starting to see mature solutions. If you want human-centered development, you probably have to reorganize your company and create a mind-set that enables human-focused products.

1.1.2 New Class of Computing

Pervasive computing, sometimes called ubiquitous or nomadic computing, describes not only a class of computing device that doesn’t fit the form factor of the traditional personal computer, but also a set of new business models supporting these devices. Where a desktop computer uses a familiar keyboard, monitor, and mouse interaction model, pervasive computing devices interact in a variety of different ways. They may use handwriting recognition, voice processing, or imagery. They’re often portable and may or may not have a persistent network connection. A pervasive computing device is meant to integrate into your lifestyle and to extend your reach into a global network of computing, freeing you from desk-bound application interaction. With the ability to take corporate and personal processes and data with you, no matter your destination, opportunities abound for improving and enhancing your personal and professional life.

The first wave of computing, from the 1940s to the early 1980s, was dominated by many people serving one computer. In the early eighties the personal computer evolved and allowed the symbiosis between a single person and a computer. The third wave was introduced with the invention of the World Wide Web in the early nineties. Suddenly, the single person could connect to many other computers and users over the global network, the Internet. The fourth wave, which we are seeing on the horizon, extends the paradigm of the third wave, allowing any device, not just computers, to connect to the global Internet and introduces the paradigm of automated services that serve the users, instead of users serving themselves on the Internet, everywhere in the world.

History of Computing

In the history of computing, we are about to move to a fourth generation. Over time, the cost and size of computers have fallen significantly, allowing more people to participate in the world of computing.

  • Mainframe Computing – Many people share one large computer
  • Personal Computing – One person works with one little computer
  • Internet Computing – One person uses many services on a worldwide network
  • Pervasive Computing – Many devices serve many people in a personalized way on a global network

Picture the sales representative who undocks a portable device each morning before heading out on his route. As he travels his territory, he can transact with his customers in real time or cache the transactions for his eventual return. Imagine finishing the last of the milk in the carton and casually swiping the bar code across a reader mounted on the refrigerator. The next time you enter the grocery store to shop, your PalmPC reminds you that you need milk or, better yet, if you don’t find a trip to the market to be therapeutic, your household point-of-presence server simply forwards the milk request to the grocer and the required milk is delivered in the next shipment to your home!

While there’s no doubt that pervasive computing will be a major part of the technological revolution in the 21st century, we have to ask ourselves whether or not it really benefits society. Many people believe that “pervasive” is just another word for invasive and that it comes at the price of our privacy. Ubiquitous computing affords us the ability to get information anytime, anywhere, but it also increases the risk that centralized personal information will be used without the owner’s consent.

There are certainly advantages to having all of your personal data and the equivalent of hundreds of web search engines available on an embedded chip controlled by your voice and an invisible heads-up display. Sure, it sounds a little like the Jetsons, but so did an Internet appliance in the palm of your hand as recently as two or three years ago.

1.1.3 The Tech Elite Duke it Out

The race is on to create the standard for the next generation of the Internet, and as often happens with such efforts in their infancy, companies will compete to establish their own vision of the universal network. Sun Microsystems with its Jini technology is probably the best-known promoter of the universal network vision. But many other well-known companies have started to create similar technologies and incorporate the idea of pervasive computing into their corporate visions. Besides Sun, Hewlett-Packard, IBM, Lucent Technologies, and Microsoft are developing such technologies.

But lip service by the tech heavyweights isn’t necessarily a precursor to widespread adoption. The tactical goal of championing these new technologies is often to make the companies developing the architecture appear innovative and to drive sales of more traditional products such as operating systems, servers, and printers. Though still in its formative stages, the concept of the universal network is now out of the bag, and strategically, the resulting technologies will open up a complete new world on the Internet for businesses of all sizes.

1.1.4 Business in a Brave New World

Pervasive computing is knocking on the door of today’s economy. To sustain a profitable business in the future, you will need to change many business cases. Selling books through a fridge or sending information about software updates to a car will be technically easy with pervasive computing, but not commercially successful. People using a fridge expect certain services from it, such as a list of the food items inside, perhaps a selection of shops within walking distance with good prices, and some recipes to make for dinner. But only a few people will go to the fridge to learn more about the latest thriller by Tom Clancy.

The same applies to all other devices that can now be connected to the Internet. People use them for a certain reason: to increase business and the value of the device; to offer services that support the device or the use of the device. Bringing “traditional” e-commerce and e-business applications to these devices will be easy, but commercially most probably a failure.

Pervasive computing will bring at least 10 times more people onto the Internet than are on it now, so it is tempting to offer the same services and products to these users through this new and alternative channel. But very few devices really offer the same type of functionality we are used to from today’s personal computer. In Europe, WAP technology has moved Internet functionality to mobile phones. As you can imagine, though, doing business with your mobile is quite a challenge. Instead of viewing just a product category page, a product detail page, and a payment page, you will have to go through at least 25 pages on your mobile phone. Your mobile phone cannot display a large amount of information at one time, and slow download times make things worse.

1.1.5 Creation of a New Paradigm

A new paradigm is necessary if we are to create a business targeted to mobile phone users. Any company should first look at the current use of a device before starting with a business plan. In our case, our potential customers use the mobile phone for calling people, storing addresses and phone numbers, playing small games, and looking up calendar appointments, dates, and times. To make money, we could supply services and products to complement the existing services. The call functionality is already supported by the built-in address book, but we could offer an online address book of the whole world. People on business in another town often have difficulty locating an address, so we could have the mobile phone support them with directions. Since mobile phones contain small games, such as Tetris, Snake, and Memory, we could offer new games for download. And to keep a calendar up-to-date, we could synchronize it automatically with the user’s existing calendar on his PC or server at work.

These are small services that can enhance the value of the phone, and people would be willing to pay a small amount of money for them. Although we can charge only a few cents per transaction, the volume gives us a good chance to become rich soon. The lower the transaction fee, the more likely people are to use the feature. Five cents for a phone and address lookup should not bother many people, and maybe 50 cents for directions to a certain address won’t really bother a businessperson. Since the number of users is 10 times higher on mobile phones than on the Internet, we could offer a certain service at a much lower price.

We should now examine the functionality we have built for the mobile phone business to see how we can use it for other devices. All the functionality can, of course, be offered on the Web. People with older mobile phones would therefore be able to use the service. But let’s not stop here. Other devices may also be in need of such services. For example, a driver could use the direction and location service, so the car should have access to it. Instead of receiving the information through a map, the driver needs the same information in spoken form. A voice should tell her to drive left at the next crossing and so forth. This means that the data should be stored in a device-independent format to obtain the maximum revenue from the service.
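The device-independent format suggested above can be sketched in a few lines. In this hypothetical Python sketch, each direction step is stored as structured data, and separate renderers produce screen text for a PC or phone and spoken prompts for a car; all names (`DirectionStep`, `render_for_screen`, `render_for_voice`) are invented for illustration, not taken from any real service.

```python
# Hypothetical sketch: keep directions in a device-independent structure,
# then render the same data per device (screen text vs. spoken prompts).
from dataclasses import dataclass

@dataclass
class DirectionStep:
    maneuver: str   # e.g. "left", "right", "straight"
    location: str   # e.g. "the next crossing"

def render_for_screen(steps):
    """Textual list for a PC or mobile phone display."""
    return [f"Turn {s.maneuver} at {s.location}" for s in steps]

def render_for_voice(steps):
    """One spoken sentence at a time for an in-car system."""
    return [f"Please drive {s.maneuver} at {s.location}." for s in steps]

route = [DirectionStep("left", "the next crossing"),
         DirectionStep("right", "the petrol station")]
print(render_for_screen(route)[0])  # Turn left at the next crossing
```

The point of the sketch is that the stored data never changes; only the renderer chosen for the target device does, which is what lets one service earn revenue across many devices.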

As you can see, every device is an aid for human beings; to make devices more valuable, it is necessary to offer additional services around them. Depending on the situation, the service, product, or information needs to be presented in a different manner in order to support people in the best possible way.

1.2 The Internet Today

(R)Evolutions are a way of life in the computer industry. Only 20 years ago, the computer world was dominated by mainframe systems. Only a few people had access to computers, and these computers were used for calculations in large corporations. Individuals did not possess computers. The personal computer in the early eighties and the GUI in the mid-eighties changed all that, thus giving computer access to millions of people. This turned the computer into a mass-market commodity. From there it was only a small step to the Internet.

Today, more than 350 million people worldwide use the Internet. According to International Data Corp., more than a quarter of a trillion dollars’ worth of business will be transacted over the Internet this year. But today’s Internet is very much restricted. Although many people believe that the Internet has opened up a whole new world, it has only created a single window into this new universe. In most cases, you need a personal computer to connect to a server, and you need a web browser to browse through the World Wide Web. Companies have set up web pages to allow customers to serve themselves, reducing the load and the cost to the company. For companies going online, the benefit is clear: less direct customer interaction, higher-quality orders, and fewer problems with orders because there is no media “middleman.” These factors drive down the cost for every sale and increase the profit for the company.

This is not only true for B2C web sites, but also for B2B and business-to-employee (B2E) web sites. People accessing the services need to specify exactly what they want. They need to provide a set of information and type it into the browser window. Communication is reduced from a human-to-human interaction to a human-to-computer interaction without effectively reducing the workload. The only thing that has happened is a shift of work from the business to the partner, employee, or customer. There are other advantages for the web client, of course. The company, its services, and its products have become accessible 24 hours a day, prices have been driven down due to market transparency, and new competitors have created an even more dynamic marketplace.

1.2.1 Internalized Outsourcing

Most online companies today are forced to build their entire offering virtually from scratch. Even if they buy software solutions, they have to provide all the services themselves. Amazon.com, for example, provides the service of selling books to its customers. All services required to do that, such as inventory management, distribution, billing, and web store management have been implemented and operated by Amazon.com, making their web site proprietary, massive, and costly. Although it is not part of their core business, these services need to be implemented, maintained, and operated by the online retailer. Enter pervasive computing.

A universal network will allow the next generation of online retailers to outsource these services to inventory management, billing, distribution, and web store management solution providers, which will provide these services at a lower price and a higher quality. Right now companies can do this, but they lose control over vital business functions. Pervasive computing will tightly integrate these service providers and ensure centralized control.

1.2.2 Subdivision

For the outsourcing of Internet services to become feasible, every service needs to be able to communicate with the others. The concept of a service then becomes more abstract, since it is made up of a series of smaller functions. The service of billing, for example, could be further subdivided into several simpler services. One such service could be bill handling. Bills are typically printed on the retailer’s printer and then sent to the customer. To reduce costs, the bill could instead be printed at a local billing office, or, if the customer’s printers are directly connected to the Web, at the customer site. Costs could be further reduced if the bill is entered directly into the Enterprise Resource Planning (ERP) system of the company and paid automatically. But it can’t stop there.

For this new paradigm to work quickly and efficiently, all levels of service need to be integrated. A new layer must be added on top of the existing Internet layers to enable services to accept other services and to connect and create new metaservices, or simply broadcast their availability to the network. Next-generation Internet startups will concentrate even more on their core business and buy the use of building blocks whenever they need them. Instead of setting up a payment server, they will rent a payment service; instead of having to buy new hardware for peak usage, they will rent network and CPU capacity from a service provider.
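The service layer described above can be illustrated with a toy example. In this hypothetical sketch, services announce themselves to a registry and a metaservice bundles several of them into one checkout function; the registry, the `announce`/`metaservice` names, and the services themselves are invented for illustration and stand in for what a real discovery protocol would provide.

```python
# Hypothetical sketch of a service registry on which metaservices are built:
# services broadcast their availability, and a metaservice bundles them.
registry = {}

def announce(name, handler):
    """A service announces its availability to the network."""
    registry[name] = handler

def metaservice(*names):
    """Bundle several registered services into one combined service."""
    def bundled(order):
        for name in names:          # pass the order through each service in turn
            order = registry[name](order)
        return order
    return bundled

# Two rented building-block services, instead of in-house implementations:
announce("billing", lambda order: {**order, "billed": True})
announce("distribution", lambda order: {**order, "shipped": True})

checkout = metaservice("billing", "distribution")
print(checkout({"item": "book"}))  # {'item': 'book', 'billed': True, 'shipped': True}
```

A startup in this model writes only the `checkout` composition; billing and distribution are looked up at run time from whichever providers have announced them.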

1.2.3 Expensive and Complex Hardware

The Internet today has the problem that accessing it requires a rather complex computer. Although connecting a computer to the Internet has become easier, many people are still lost if a problem occurs, because they don’t understand the underlying technology of computer and networking hardware.

To make the Internet more accessible to more people, we not only need devices that are much easier to use and configure than a computer, but we also need to change the whole paradigm of how the Internet works. We are already seeing the first postcomputer Internet generation, which uses mobile phones more than computers to send e-mail, chat, and search for information. E-mail written during a flight, for example, simply waits until the plane has landed and the mobile phone reconnects to send it out.

Television sets from Loewe are Internet enabled. Mobile phones from Nokia are WAP enabled. But even before the WAP era it was possible to surf the Web with a mobile phone. You could either connect a mobile phone to your laptop or use the Nokia Communicator to write e-mail and surf the Web. There is also no need to buy a new television to connect to the Web. For years you have been able to buy so-called set-top boxes to add the Internet functionality to your television set. This addition allows you to browse the Web without having to know how to install a browser or update the operating system.

Using a television or a mobile phone to access the Internet is not the same as using a web browser. Later, when we discuss WAP in more detail, both from a technological and a business point of view, you will understand better why this emerging technology has not lived up to its expectations. But even if WAP were the perfect technology, it would not remove some of the elementary problems of the Web, because WAP is only a technology to access the old Internet infrastructure.

These innovations over the past years did not change things a lot, because the existing functionality of a personal computer was introduced into new devices without respecting the limitations of these devices. The concept of pervasive computing does not stop with transferring standard Internet functionality to new devices; it also allows the creation of new applications and services. Turning on a washing machine, checking prices at a gas station, or locating a plane en route could become possible through the Internet. Of course, this does not mean that everyone should be allowed to access all information and services through the Internet. It is important to realize that the pervasive computing ideal of any service to any device over any network is a statement of enablement; it does not mean that every service will be made available to every type of device over every type of network.

The owner of the washing machine should be the only one to switch it on or off. The prices of petrol should be visible to everyone, but only the owner of the petrol station should be allowed to reorder petrol or change the prices. The same applies to the services and information that are provided by an airplane. Everyone should be able to check whether a certain plane is late, but nobody except the pilot should be able to fly the plane. New security models and measures are therefore necessary to implement pervasive computing technologies.
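One minimal way to express this security model is to pair every operation a device offers with an access rule, so that status can be read by anyone while control stays with the owner. The washing-machine service below is a hypothetical sketch of that idea, not a real protocol; the owner name and function names are invented.

```python
# Hypothetical sketch: a device exposes a public read operation and an
# owner-only control operation, mirroring the washing-machine example.
def make_service(owner):
    state = {"running": False}

    def read_status(user):
        return state["running"]          # anyone may check the status

    def switch(user, on):
        if user != owner:                # only the owner may operate it
            raise PermissionError("not the owner")
        state["running"] = on
        return state["running"]

    return read_status, switch

read_status, switch = make_service("alice")
assert read_status("bob") is False       # public read works for everyone
assert switch("alice", True) is True     # owner-only write succeeds
```

The same pattern extends to the petrol station (public price reads, owner-only price changes) and the airplane (public delay checks, pilot-only control).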

There is still plenty of room for improvement. Despite bountiful bandwidth, information is still locked up in centralized databases, with “gatekeepers” controlling access. Users must rely on the web server to perform every operation, just like the old timesharing model. Web sites are isolated islands and cannot communicate with each other on a user’s behalf in any meaningful way. Today’s Web does little more than simply serve up individual pages to individual users—pages that mostly present HTML “pictures” of data, but not the data itself (at present, making both available is too technically demanding for most web sites). And the browser is in many respects a glorified read-only dumb terminal; you can easily browse information, but it is difficult to edit, analyze, or manipulate (i.e., all the things knowledge workers actually need to do with it). Personalization consists of redundantly entering and giving up control of your personal information to every site you visit. You have to adapt to the technology, instead of the technology adapting to you.

Another major inhibitor to creating worldwide e-business web sites is the multitude of interfaces that are often incompatible, making it impossible to share information and services across a computer, mobile phone, and car, for example. Even if the interfaces seem to be compatible, the different devices provide varying levels of data access, meaning that there is a difference between the information you receive and visualize in the car and that at home. In some cases it does make sense to present the same data in different manners, but the amount and format of data should not vary from device to device. A car should be able to receive the same data as your computer at the same detail level. On your computer, you might be presented with a map and a textual description of how to get to a certain street. In the car, the information should be presented by voice, meaning that the car computer reads the textual information to make driving safer.

Things become even worse right now if you not only read data but also use different devices for data input. Many people already struggle because they have different calendars, one on their computer, one on their mobile phone, and a hard-copy one. They need to make sure that all meetings are recorded in all three calendars. In the future, all devices will be able to synchronize themselves without manual interaction. Okay, today you can synchronize your palmtop with your laptop, but you need to install additional software and connect them via a cable. In the future, no cables and additional software will be required.
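Automatic synchronization of several calendars can be sketched with a simple merge rule. The example below assumes a last-writer-wins policy keyed on a modification timestamp; this is an invented illustration, and real synchronization protocols handle conflicts in far more sophisticated ways.

```python
# Hypothetical sketch: merge calendars from several devices without manual
# interaction by keeping the most recently modified copy of each entry.
def sync(*calendars):
    """Each calendar maps meeting-id -> (timestamp, details)."""
    merged = {}
    for cal in calendars:
        for mid, (ts, details) in cal.items():
            if mid not in merged or ts > merged[mid][0]:
                merged[mid] = (ts, details)   # last writer wins
    return merged

phone = {"m1": (10, "Lunch 12:00")}
pc    = {"m1": (12, "Lunch 12:30"), "m2": (5, "Review 15:00")}
merged = sync(phone, pc)
print(merged["m1"][1])  # Lunch 12:30
```

With devices synchronizing themselves this way, the meeting moved on the PC would silently win over the stale entry on the phone, with no cable and no extra software visible to the user.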

Once the technology has been put in place, personalized “information spaces” can be created on the Internet. These repositories would contain all information about a certain person, process, or company. The information will most likely be organized in an object-oriented database in which objects and attributes carry security settings that allow others to view them or not. This would mean that consumers don’t have to reenter information on every e-business web page; once they decide to buy something, the site will get access to the required data. And not only web sites will have access: other devices will be able to retrieve data whenever they need it. These could be devices owned by that particular person, such as a mobile phone or a PDA, but they could also be other devices, such as a scanner at the airport that does an iris scan to check identity.
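A personal information space with per-attribute security settings might look like the following hypothetical sketch, in which each attribute lists the parties allowed to read it; the class name and the party names are invented for illustration.

```python
# Hypothetical sketch of a personal "information space": every attribute
# carries its own visibility setting, and a site only sees what it may.
class InformationSpace:
    def __init__(self):
        self._data = {}      # attribute -> (value, allowed_parties)

    def put(self, attr, value, allowed):
        self._data[attr] = (value, set(allowed))

    def get(self, attr, party):
        value, allowed = self._data[attr]
        if party not in allowed:
            raise PermissionError(f"{party} may not read {attr}")
        return value

space = InformationSpace()
space.put("shipping_address", "12 Main St", allowed={"bookshop.example"})
space.put("iris_scan", b"\x01\x02", allowed={"airport-scanner"})

print(space.get("shipping_address", "bookshop.example"))  # 12 Main St
```

The bookshop can read the shipping address it needs to complete a sale, while the iris scan stays visible only to the airport scanner, so the consumer enters each piece of data once and controls who may retrieve it.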

Most web sites today focus on fancy graphics and a strong marketing message, but very few can support business and commerce transactions, and those that do often do so inadequately; at present, offering both content and transactions is too technically demanding for most web sites. And the browser is in many respects a glorified read-only dumb terminal; you can easily browse information, but it is difficult to edit, analyze, or manipulate it (i.e., all the things knowledge workers actually need to do with it). Personalization consists of redundantly entering your personal information and giving up control of it to every site you visit. You have to adapt to the technology, instead of the technology adapting to you.

The transaction process, moreover, is not often developed by a business unit, but by the IT department, which normally has no contact with customers and partners and therefore does not know their requirements very well. Developers of web sites face another problem: the interfaces to existing systems and to partner and customer online businesses. No system today lets developers write code once and deploy it to a variety of devices. Java technology comes very close, but it requires that all software components be rewritten in the Java programming language; many companies are not willing to spend the time, money, and effort on a porting project for a software solution they have used for years.

Therefore, a new paradigm and vision are needed—ones that address all the issues raised here and that provide the foundation for the next generation of electronic businesses. Several technologies, paradigms, and visions have been developed. This book describes them so you can choose the right one for your business case. In the end, business is all that matters.

There are more reasons to create a pervasive computing vision of the future. Consider the following example. Today, people can use mobile devices and connect to a range of devices, but in most cases special knowledge is required. Physically connecting a PDA to a computer is usually simple. The cabling is standardized, and even inexperienced users will be able to plug in the cables. The problem arises as soon as they want to transfer data from one device to the other. If they use the default configuration of the PDA, the transfer will work, but most people configure their computer to their needs, with tools and programs that best fit their requirements, without regard to the PDA. The PDA is not prepared to download data from any calendar, it is not able to communicate with every operating system, and it is not ready to use all existing services. Today, most standard applications, such as e-mail and calendar, usually work, but there is no guarantee that they will. People with more in-depth technical knowledge are able to install additional drivers and configure them properly to work with the system, but the technical layman will not be able to use the network.

Therefore, the existing Internet technologies, communication protocols, and interface designs are unable to handle a heterogeneous network properly. The existing paradigm does not scale with increasing numbers of services and computing devices: the knowledge and time required of users today will increase dramatically as more software services and computing resources become available. Nor does it scale with increasing user mobility. Manual configuration costs time. If a user remains in a computing environment for only 15 minutes, he does not want to spend the first 10 minutes restoring computing contexts manually. The existing infrastructure also does not tolerate change or failure of computing resources. The services available in an environment change in the presence of mobile devices such as laptops and I/O devices. Software services can be installed or removed without users’ knowledge. Proximity-based networking (such as IrDA and Bluetooth) leads to dynamically changing networks. Failures of services and networks can change the availability of computing resources. Manual configuration is simply too costly in this setting of transient services, networks, and devices.

Today’s Computing Model

Today’s computing model is targeted toward the individual using a single device and can therefore be characterized by the following assumptions:

  • Desktop computing – People typically sit in front of a desktop PC to do their work.
  • Stationary devices and software – People tend to have total control over the few devices they use, including software and hardware configuration.
  • Monolithic applications – Most applications are designed to interact with humans instead of with other applications.
  • Manual mapping – Computing tasks are mapped manually to applications. Users need to know which application is capable of what.
  • Single device computing – Users typically only use one device at a time.
  • Manual configuration – Users are responsible for configuring applications themselves and keeping a single configuration regardless of the environment.

1.2.4 Current Restrictions

Fundamental problems prevent current technology from becoming pervasive. Today, people typically sit in front of a desktop PC to do their work. This means that there is a one-to-one relationship between the device and the human. In the future, people are more likely to use a variety of different devices that need to be configured on-the-fly as the person using them wants them to be. Even if people use more than one device today, they typically use stationary devices such as desktop workstations and a couple of mobile devices (laptops, PDAs), and they do their computing primarily on those devices. People have complete control over those devices and how they are configured. People buy software, install it on their personal devices, and compute primarily with that software. If they move with their laptops from one environment to another, they need to reconfigure the laptop to work in the new environment. Network connections, printers, and online services need to be reconfigured manually.

Today’s applications are designed to interact with humans only. This means that many companies invest a lot of money in the creation of GUIs that are designed to keep the human using the software happy, but most applications are unable to communicate with other applications in a heterogeneous network. CORBA and E-Speak provide the ability to offer services and information across applications and modules, but only a few applications support these paradigms today—and if they do, they support only one of them, making them available to only very few other applications.

Another problem with today’s computer environment is that the user needs to know which application performs which task. Users must keep track of how to use these applications and configure them to carry out each task that they want to do. The future needs to provide a single repository where users can locate all services available to them at a certain time, at a certain location, and with a certain device. Only then is the universal network really in place.
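A minimal sketch of such a repository could look like this, with location, device type, and time as the lookup keys. The registry format and the service names are invented for illustration.

```python
def find_services(registry, location, device_type, hour):
    """Return the services usable right now, here, on this device.

    `registry` is a list of service descriptions; the three filter
    criteria stand in for the richer matching a real universal
    network directory would need.
    """
    return [s["name"] for s in registry
            if location in s["locations"]
            and device_type in s["devices"]
            and s["open_from"] <= hour < s["open_to"]]

registry = [
    {"name": "print-service", "locations": {"office"},
     "devices": {"laptop", "pda"}, "open_from": 8, "open_to": 18},
    {"name": "traffic-info", "locations": {"car", "office"},
     "devices": {"car", "phone"}, "open_from": 0, "open_to": 24},
]

# A laptop user in the office at 10:00 sees only the print service.
available = find_services(registry, location="office",
                          device_type="laptop", hour=10)
```

The point of the sketch is that the user asks one directory a single question instead of having to know in advance which application performs which task.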

Computers and networks are isolated silos. Communication between systems is complex and inefficient. To extend an application or technology from one computer or network to another requires additional intervention, whether it is a file conversion or a complete systems integration.

Programs are tied directly to the operating system, which is tied to specific hardware. This prevents the user from accessing any given program from any hardware device—a prerequisite for pervasiveness.

Today’s users are also not accustomed to using several devices to complete a task. Everything needs to be installed on one system to be accepted by the user. In the future, the user must understand that it may be more convenient to access a service that involves several devices at once. The user should not need to know which device does what in this setup. It should be a transparent service, and the underlying components should be managed by the service. Single-device computing should become a paradigm of the past. A new user interface is therefore necessary to control and use the new networking service environment. This interface will also obviate the manual configuration that drives users crazy. Today users are responsible for configuring applications; in the future, a service will configure the application to users’ current needs.

1.3 New Internet Technologies

As new devices are connected to the Internet, the number of users will explode. As a result, the number of business opportunities will increase rapidly. Pervasive computing technologies do more than connect these devices to the Internet. Not only can users view content through their WAP phone, but domestic devices, consumer electronics, and other devices, such as cars and planes, can access specialized services and provide services to other devices and users. Pervasive computing creates a universal network that will include business models such as m-commerce and home networks. It is the basis for the next generation of the Internet.

More and more devices contain computer chips and pervade every fabric of life. The concepts are not new; in the early nineties, many companies created visions of this future. The extension of the existing Internet is also often called the universal network.

A few years ago, Sun, Oracle, and a few others developed the network computer, which basically consists of a screen, a keyboard, and some memory. Applications and processing power were requested over the network as needed. Even files were saved on the Internet. By implementing this paradigm, the existing hardware could be replaced by a newer generation that would be much cheaper and much easier to administer and configure. These technical tasks would be done by a system administrator who took care of hundreds of computers. If you look at the underlying paradigm, you can easily see it as a predecessor to the universal network.

1.3.1 Application Service Providers

The vision of the network computer was regarded as the end of the personal computer, but flaws in the concept prevented the network computer from really taking off. However, several of its ideas have been enhanced and introduced into the universal network. The basic idea of connecting less powerful devices to the Internet has been extended to noncomputer devices, such as mobile phones. More and more applications are now available through application service providers (ASPs). One of the first applications, e-mail, has become one of the most used applications on the Web, and it is now also possible to use Microsoft Word or SAP R/3 over the Web. By using a service instead of installing the software, the user does not have to pay a fee before using the software and does not need to buy the necessary infrastructure. In the ASP model, billing is done according to the pay-per-use model. The more often users connect to the ASP and use its applications, the more they pay.

The advantage is that the software and the infrastructure are controlled in a central environment; as soon as the software is updated, every user of the service can use it without having to install new versions of the software locally. As more and more nontechnical people use computers, this paradigm becomes more important. For complex software such as SAP R/3, the cost for the infrastructure can be very high, meaning that a company has to invest several million dollars or euros into the software, hardware, and network. Installation of a new version is also a major hassle. By outsourcing these applications, companies can focus on their core competencies. Today’s ASPs are not yet compatible with the universal network, but they are the first step toward a truly networked business environment.

The major problem of ASPs today is to make the software network-friendly. Most software written today is not ready for the ASP model. Microsoft Word and SAP R/3, for example, need additional software wrappers to allow remote use of the software. The new technologies and programming paradigms will help to resolve this issue.

The vision of pervasive computing to interconnect all people by a globally integrated, ubiquitous network promises greater empowerment for the individual. Yet realizing such a vision requires the implementation of new technologies that, up until now, were merely visionary. Recent advancements in pervasive computing technologies promise the arrival of a new era in ubiquitous computing, marked by the emergence of high-speed, multilayer, in-home networks that will integrate traditional home automation and control technologies with real-time, media-rich applications like voice and video conferencing. Most importantly, these new technologies also involve new deployment strategies that will bring broadband internetworking applications to the domestic mass market.

Pervasive computing gives us tools to manage information easily. Information is the new currency of the global economy. We increasingly rely on the electronic creation, storage, and transmittal of personal, financial, and other confidential information, and we demand the highest security for all these transactions. We require complete access to time-sensitive data, regardless of physical location. We expect devices—PDAs, mobile phones, office PCs, and home entertainment systems—to access that information and work together in one seamless, integrated system. Pervasive computing can help us manage information quickly, efficiently, and effortlessly.

Pervasive computing aims to enable people to accomplish an increasing number of personal and professional transactions by using a new class of intelligent and portable devices. It gives people convenient access to relevant information stored on powerful networks, allowing them to easily take action anytime, anywhere.

These new intelligent appliances, or “smart devices,” are embedded with microprocessors that allow users to plug in to intelligent networks and gain direct, simple, and secure access to both relevant information and services. These devices are as simple to use as calculators, telephones, or kitchen toasters. Pervasive computing simplifies life by combining open-standards-based applications with everyday activities. It removes the complexity of new technologies, enables people to be more efficient in their work, and leaves more leisure time. Computing is no longer a discrete activity bound to a desktop; pervasive computing is fast becoming a part of everyday life.

Pervasive Computing Summary

Pervasive computing means many things to many people. Here is a short definition of all it encompasses.

  • Invisible devices – Numerous, casually accessible, often invisible computing devices
  • Embedded microchips – Microchip intelligence embedded into everyday devices and objects
  • Always on – Access to information, entertainment, and communication with anyone, anytime, anywhere
  • Ubiquitous network – Everyone and everything connected to an increasingly ubiquitous network structure
  • Life-enhancing applications – Invisible penetration of technology into the mainstream mass market through a variety of life-enhancing applications
  • Consumer-centric solutions – Device “gadgetry” for simple and practical consumer-centric solutions
  • Increasing productivity – Mainstream market value propositions: Saving time, saving money, enhancing leisure and entertainment
  • Long-term vision – Using technology in ways that empower people to work, live, and play more effectively

1.3.2 Wireless Networks

Simplicity can only be maintained if devices are connected in a wireless mode. That way you need not plug and unplug devices as you move to a new location or introduce new components to a local area network (LAN). As long as only a few people use nomadic devices in a controlled environment, it is acceptable that they configure their devices themselves and plug and unplug them. But imagine hundreds of thousands of people in a city moving around, switching contexts and using different local services. If these multitudes needed to plug in every time, setting up the environment would take longer than using the service. With wireless technologies, it is easy to use an existing installed local infrastructure.

Two kinds of wireless networking are required. One kind provides long-range connections via cellular phones or satellite connections to connect to the Internet or special service providers that are not locally available. The other kind provides local, short-range connections to give access to local services; this can be achieved by Bluetooth or wireless LAN (WLAN) connections. An overview of the existing technologies is given in Chapter 4. A wireless network is, therefore, the basis for a pervasive computing architecture.

Although many people will carry around devices, they will not want to do so all the time. Many devices will be installed permanently in a certain location and can be accessed and used by many people for varying services. To allow access to these devices, a dynamic ownership needs to be implemented to allow the use of wired infrastructure and the seamless integration of the wireless and wired world.

Dynamic ownership means that the devices require a login from everyone using them. Once a person logs in, the system is configured to the needs of that user. This approach requires a centralized database with the profile of the user that can be accessed by all devices.
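The mechanism can be sketched as follows; the profile fields and the `SharedTerminal` class are hypothetical stand-ins for a real profile database and a permanently installed public device.

```python
# Central profile database: one entry per user, readable by all devices.
PROFILES = {
    "alice": {"language": "de", "font_size": "large", "home_page": "news"},
    "bob":   {"language": "en", "font_size": "small", "home_page": "mail"},
}

class SharedTerminal:
    """A permanently installed device that reconfigures itself for
    whoever logs in, and drops back to a neutral state on logout."""

    def __init__(self):
        self.current_user = None
        self.settings = {}

    def login(self, user):
        self.current_user = user
        # Pull the user's profile from the central database.
        self.settings = dict(PROFILES[user])

    def logout(self):
        self.current_user = None
        self.settings = {}

terminal = SharedTerminal()
terminal.login("alice")   # the terminal now behaves like Alice's own device
```

The same terminal serves Bob with his own settings a minute later; ownership of the device is dynamic, but each user's profile lives in one central place.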

1.3.3 Framework for the Universal Network

To promote new services on the Internet, standards organizations must establish new standards that are as common and accepted as HTML for the Web. These standards include protocols and interfaces for the access of online services. The standards must allow the description and virtualization of services in order to allow access to services and information not yet available in digital form. Only if the framework supports this paradigm is it possible to create a large set of services in a short time.

A service should be viewed as an object on the Internet. At the moment, almost everyone agrees that objects should be described in XML, but different organizations have different approaches to how to describe these objects. OASIS, RosettaNet, and BizTalk try to describe services, products, and information related to vertical and horizontal markets, to make them more easily comparable and usable. As long as the different organizations fail to create a single standard, it will be difficult to use all services on the Internet. XML allows the easy transformation of data from one format to another, but if the granularity of the information is different, a manual process needs to be put in place to add the missing details. For example, one standard could describe a personal computer as follows: 999 MHz processor, 128 MB RAM, 32 GB hard disk. Another standard would describe it as follows: about 1 GHz Pentium 4 processor, 128 MB DIMM RAM, 32 GB IBM hard disk. Although both descriptions are structured, it is almost impossible to convert the first into the second because essential information is missing. Fortunately, the organizations mentioned understand the problem and are cooperating to circumvent it.
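The granularity problem can be demonstrated in a few lines. Both XML schemas below are invented stand-ins for the competing standards, but the effect is the same: attributes that exist only in the finer schema come back empty and would have to be filled in by a manual process.

```python
import xml.etree.ElementTree as ET

# The same PC described under two hypothetical, structurally
# different schemas: one coarse, one fine-grained.
coarse = ET.fromstring(
    "<pc><cpu>999 MHz</cpu><ram>128 MB</ram><disk>32 GB</disk></pc>")
fine = ET.fromstring(
    "<pc><cpu speed='1 GHz' family='Pentium 4'/>"
    "<ram size='128 MB' type='DIMM'/>"
    "<disk size='32 GB' vendor='IBM'/></pc>")

def to_fine(coarse_pc):
    """Convert the coarse description into the fine schema. The
    structure maps easily, but `family`, `type`, and `vendor` do
    not exist in the source: they come back empty."""
    pc = ET.Element("pc")
    ET.SubElement(pc, "cpu", speed=coarse_pc.findtext("cpu"), family="")
    ET.SubElement(pc, "ram", size=coarse_pc.findtext("ram"), type="")
    ET.SubElement(pc, "disk", size=coarse_pc.findtext("disk"), vendor="")
    return pc

converted = to_fine(coarse)
# Every converted element has at least one attribute left empty.
incomplete = [el.tag for el in converted if "" in el.attrib.values()]
```

The transformation itself is trivial; what no program can do is invent the processor family or the disk vendor that the coarse standard never recorded.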

Components for the Universal Network

Four components are necessary for pervasive computing:

  • Universal cataloging system – Users should be able to use any computer to find any program that suits their needs.
  • Universal application platform – When users click on a file, a program should launch regardless of where it might be stored on the Internet and on what type of device.
  • Universal file management – Users should be able to use any computer to not only access their own files but also to access any files which they have permission to view or access.
  • Universal payment system – Users should have a set method for measuring what they use and specifying how they should pay for it.

1.3.4 Metaservices

By virtualizing applications and services, companies can build up new applications and services that are composed of several virtual applications and services, thus reducing the cost of development and implementation. New applications and services can be created on-the-fly and for a certain purpose only. Virtualizing the services and applications enables them to reconfigure themselves to work together seamlessly.

Support for these new metaservices on the universal network comes from support for the processes that create the metaservice. These processes are also being defined in new standards. One of these standards, called “Job Description Format,” describes all processes in the print industry. Even the most complex processes will be described in this standard. Once all processes are documented in a simple way, everyone can review the stage a certain process is in, independently of the services and products used to implement the process. Only if standardization can be achieved can metaservices on the Internet be controlled and guaranteed.

What I call metaservices is today known as a web site. Companies rely heavily on standard applications and interfaces between the different applications to create a complete business model on the Web. Using the concept of pervasive computing, companies can set up a virtual company with virtual organizations. Instead of implementing all components themselves, companies can tie in virtual processes, business models, and organizations. This allows companies to concentrate on their core competencies and think about innovative add-ons to their core business model. Companies like Amazon.com and Yahoo! have spent millions of dollars on components that are not part of their core competencies.

The business of Amazon.com is to sell products. To provide the basic business model, Amazon.com built up a complex infrastructure to support it. All services connected to the selling of books had to be implemented by Amazon.com. This undertaking created massive complexity in hardware, software, and services, which Amazon.com has to implement and manage, making the whole site complex, proprietary, and expensive. The web pages show only a small part of a complete company and its processes. To sell products, a company must set up a logistics service, an enterprise resource planning service, a call center service, a web-hosting service, and many other things a customer does not care about.

Many companies that enter the Internet world suffer because some components are missing. Manufacturers trying to target consumers, for example, often have problems supporting the needs of those consumers because the existing logistics services can handle the shipment of thousands of products, but not of single goods. Manufacturers also often lack a call center for consumers and receive thousands of calls and e-mails instead of a few from dealers. Startup companies have even more trouble, since they have no departments at all. Through virtual departments, it is easier to set up a business without having to invest a lot in a business model that may fail after a short while. This gives new Internet businesses a better standing and also reassures investors that their money is well spent.

1.3.5 Security Requirements

A new level of security will be required to support the universal network. Whereas access to the Web is easily secured through login and password, secured access to the universal network is more complex. Multiple access rights must be created for every object. Information and service objects must be set in context to provide different views of information and different aspects of services. Only then can the service be automated.

Simple login and password procedures will be inadequate in a universal network. Many devices won’t have a keyboard—they will have an eye scanner, a voice recognition system, or a smart card reader—but the universal network security method must be independent of how users or services are identified. Further, the security method must be able to identify the context in which a device is used. Context awareness becomes vital for every device, as we will see later in this chapter.
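One way to make identification independent of the input method is to hide each mechanism behind a common interface. In the following sketch (all names and data are hypothetical), the network calls the same `authenticate` function whether the device has a keyboard or an iris scanner.

```python
class Identifier:
    """Base class: each concrete identifier (password, iris scan,
    voice print, smart card) knows how to verify a user. A real
    pervasive security method would also carry context data."""
    def identify(self, credentials):
        raise NotImplementedError

class PasswordIdentifier(Identifier):
    def __init__(self, table):
        self.table = table          # user -> password
    def identify(self, credentials):
        user, password = credentials
        return user if self.table.get(user) == password else None

class IrisScanIdentifier(Identifier):
    def __init__(self, scans):
        self.scans = scans          # scan hash -> user
    def identify(self, credentials):
        return self.scans.get(credentials)

def authenticate(identifier, credentials):
    """The network sees only this call; it does not care whether
    the device has a keyboard, a scanner, or a card reader."""
    return identifier.identify(credentials)

keyboard_device = PasswordIdentifier({"alice": "s3cret"})
airport_scanner = IrisScanIdentifier({"a3f9": "alice"})
```

Adding a new identification method then means adding one subclass, not changing the network's security protocol.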

1.3.6 Operational Module

To support security needs, a central operational module is needed to provide the basic services for the pervasive computing platform. This operational module should be able to search for, broker, and execute services. To minimize delay, it should be relatively near to the person or device requesting a service. The person or device should be able to connect to another pervasive computing platform if the nearest one is not available.

Each of these platforms contains a directory of services available to the person or device in a certain situation. These services can be combined to form new metaservices, and the status of these services and their processes can be tracked. That way, the module can guarantee the quality of service of the network, and can predict when a certain result can be obtained and what needs to be done to optimize the result. The module should also be able to identify participants and restrict access to personal and other sensitive information to a certain group of people.
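A toy version of such an operational module might look like this. The registered services are trivial stand-ins, and a real module would also handle discovery, failure, quality-of-service prediction, and access control.

```python
class OperationalModule:
    """Sketch of the operational module: it keeps a directory of
    services, brokers requests, and chains services into a
    metaservice while tracking the status of every step."""

    def __init__(self):
        self.directory = {}

    def register(self, name, func):
        self.directory[name] = func

    def execute_metaservice(self, names, value):
        """Run the named services in order, feeding each result to
        the next, and record the status of each processing step."""
        status = []
        for name in names:
            value = self.directory[name](value)
            status.append((name, "done"))
        return value, status

module = OperationalModule()
module.register("translate", str.upper)        # stand-in services
module.register("summarize", lambda text: text[:10])

result, status = module.execute_metaservice(
    ["translate", "summarize"], "a long german text")
```

Because the module tracks every step, a client can always ask which stage a running process is in, independently of the services used to implement it.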

Virtualizing service objects makes it possible to use instances of a certain service in all sorts of different contexts without the need to recompile or reconfigure the service component. It also ensures that the service is implementation-independent and that a change in the service component will not affect the existing functionality that is broadcast to the Net world.

1.3.7 Virtualization of Applications and Information

The first steps toward pervasive computing have already been taken. The Web has virtualized processes and applications. Instead of using applications through their own user interfaces, people access them through the Web. Instead of using an e-mail client, many people use a web browser; instead of using the SAP GUI, many people use a web front end to SAP. The advantage is that they are no longer restricted to a certain location or a certain installation. More and more applications use the Web to present and display processes and information.

When a service is moved to the Web, any device that contains a web browser can access the service. People can access their e-mail from anywhere in the world without having to carry around a local computer or a laptop. The application is not bound to a certain hardware installation or software configuration. A web browser is all users need.

Another problem will be solved in the near future: the distribution and access of files. Today, most data is on a local system, such as a portable or desktop computer. This data is only accessible to a person or application service that is near this data. “Near” means that the data is on the same hard disk, in the same room, or on the same network segment. Putting data on the Internet today puts it at risk of being available to anyone. As more and more applications become pervasive and reside on the Internet, the next logical move is to put the data on the Internet as well, so users can access their applications and files from anywhere.

fusionOne has developed a technology that allows users to manage their private data on the Internet. The technology allows users to collect information on a private network and provide it in a directory on the Internet to privileged persons. Business people who are traveling can access their data on their private network by connecting to the web site of fusionOne (see Figure 1.1) and entering a login and password. Private networks typically belong to companies or to families that are normally not accessible from the outside. With fusionOne’s software, called Internet Sync, secure access to data on private networks is possible. Moreover, the risk of having several incompatible versions of the same document is reduced; with Internet Sync, users always have the most up-to-date version of a file, no matter where they are. The software is so far only a point solution, but it is easy to imagine a future version that allows controlled access to information relative to a certain person or situation.
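The version-reconciliation behavior described here can be sketched as follows. The data model (a file name mapped to a version counter and content) is our own simplification, not fusionOne's actual design.

```python
def internet_sync(local, remote):
    """Reconcile two copies of a user's files so that both ends
    hold the newest version of every document, loosely modeled on
    the behavior the text ascribes to Internet Sync."""
    merged = dict(local)
    for name, (version, content) in remote.items():
        if name not in merged or version > merged[name][0]:
            merged[name] = (version, content)
    # Both sides adopt the merged view and are now identical.
    return merged, dict(merged)

laptop = {"report.doc": (3, "draft v3"), "notes.txt": (1, "ideas")}
office = {"report.doc": (5, "final"),    "budget.xls": (2, "q2")}

laptop, office = internet_sync(laptop, office)
```

After the exchange, the traveler's laptop holds the final report edited at the office, and the office machine holds the notes written on the road: no incompatible versions remain.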

Other applications that deal with data distribution—such as Gnutella and Napster—are already available. Both were developed for MP3 files, but the technology allows swapping of any type of file and could be used for business transactions if additional security were included.

Although file sharing on a network is a nice service, it does not handle transformation of the information contained in the files. It does not help to share a Word document across a heterogeneous network if some of the devices are not capable of viewing the information. To use applications and files on any type of device, the programming paradigm must be changed to recognize the abilities of each device. These changes are described in detail in Chapter 4. Only if these new standards are put in place can pervasive computing components be written without too much overhead. Only if you can virtualize information and services can you use them on any device and participate in the universal network.

By virtualizing products, information, and services on the Internet, more companies can do business online. They don’t even have to be completely online themselves; it is enough to have a virtual representation on the Internet that is managed by someone else. Although this approach is not optimal, it does allow all companies to take part in the new economy at little cost. Pervasive computing combines online services with offline services to create new, more powerful services for everyone in the online and offline world. It enables a new generation of Internet services, such as information, product, and service brokers that transparently handle both online and offline businesses.

Pervasive computing not only provides these services to the general public, but can also restrict a dedicated service to certain people. It also makes automatic communication between devices possible. A car, for example, could check, on behalf of the driver, where the nearest and cheapest gas station can be found. The heater could participate in an energy auction to buy as much energy as possible for a certain amount of money.

To make pervasive computing successful, many small tasks must be done automatically so that users can concentrate on the real tasks. Achievement of this goal will mean an augmentation of life.

1.3.8 A New Business Platform

The introduction of pervasive computing technologies will create a new business-to-business platform that will bring greater transparency to the market. Only with pervasive computing is it possible to compare all offers without missing one. It is also possible to combine service offerings from different companies and pick the best parts of each offer to create a new one.

B2B exchanges, in which companies provide offers for products and services in response to requests from other companies, are becoming more popular. Today this process is still manual: someone has to create a request for proposal, and another person has to answer it. The first thing we will see in the future is automatic answers; later, we will see the on-demand creation of requests for proposals whenever a need for a certain good or service arises in production. Through pervasive computing, many processes that have moved from the physical world to the Internet will be automated in the future.

The technologies to support this automation are described in detail in Chapter 4. Which of them will succeed cannot be said at the moment, but the winner can only be a technology that communicates with other pervasive computing technologies. No company in the world has such market dominance that it can decide which technology will be the only one. E-Speak from Hewlett-Packard, for example, communicates with the Jini technology developed by Sun. Only if all deployed technologies are able to communicate with each other will a truly universal network be born.

1.3.9 New Interfaces

Today, working across online and offline environments, even when using only a single laptop, can be a frustrating and inefficient experience. The digital world is becoming more fragmented. Applications that we use daily, such as web browsers, text editors, graphical design software, and communication tools, require different software platforms with different functionalities. Although many services are already available through a web browser, each web service has its own user interface and its own formats. Most people would prefer a single, unified environment that adapts to whichever setting they are working in and that moves transparently between local and remote services and applications. Making an environment device-independent turns it into a sort of universal canvas for the Internet Age, as Microsoft calls it in its .NET vision.

A set of new interfaces will make the use of digital services much easier in the future. To make interaction with net-enabled devices as seamless as possible and to hide as much of their technology as possible, natural interfaces will let a person use a device quickly without having to concentrate on the technology. Only this approach allows the vision of invisible computing. These interfaces will move away from “traditional” input devices, such as keyboards and mice, toward speech, vision, handwriting, and natural-language input technologies. More and more new devices will provide one of these interfaces, some of them in combination. The natural interface provides the right user experience for every device or environment.

Providing natural interfaces is not enough. All devices need to share a unified environment to enable users to interact with information in a unified way, no matter which device they are using. Therefore, it is necessary to create a compound information architecture that integrates all types of services into a single environment, making it easy to switch contexts, services, physical environments, and devices. Such an architecture creates a universal canvas from which users read and write information and use services from any device. This universality also allows a seamless view of information that may be distributed around the world.

Multiple ways of identification ensure the correct view of services and information. Once you have identified yourself, you need to provide your profile to the service you want to use. Therefore, a virtual representation of yourself is required to manage the personal interaction with digital services. These information agents can manage your identity and persona over the Internet and provide greater control of how Internet services interact with you. They should maintain your history, context, and preferences; basically, they should store your past, present, and future on the Internet in a secure way. With privacy support from agents, your personal information remains under your control and you decide which service can access it. This allows you to create your personal preferences just once, which you can then permit any digital service to use.
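The idea of an information agent that keeps your profile under your control can be sketched as follows. The class, field, and service names are all hypothetical; no real product’s API is implied.

```python
# Sketch of an information agent that guards a user's profile.
# Services must be explicitly granted access to individual fields.

class ProfileAgent:
    def __init__(self, profile: dict):
        self._profile = profile
        self._grants = {}  # service name -> set of allowed fields

    def grant(self, service: str, fields: set) -> None:
        """The user permits a service to read certain fields."""
        self._grants.setdefault(service, set()).update(fields)

    def request(self, service: str, field: str):
        """A service asks for one profile field; None if not permitted."""
        if field in self._grants.get(service, set()):
            return self._profile.get(field)
        return None

agent = ProfileAgent({"name": "Alice", "language": "de", "card": "1234"})
agent.grant("news-portal", {"language"})
print(agent.request("news-portal", "language"))  # "de"
print(agent.request("news-portal", "card"))      # None - never granted
```

The preferences are created once and live with the agent; each digital service sees only the slice the user has explicitly released.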

1.3.10 Context Awareness

As more and more devices with Internet access become available, their size is shrinking. The interfaces for entering and retrieving information therefore become more cumbersome to use: the user must enter a lot of information into each device to use a service. Reducing the amount of information that must be entered is, therefore, one of the most pressing problems in making devices more user friendly. Part of the solution is known as context awareness. By means of hardware sensors and machine-learning technologies, devices can detect the context of the user and adapt their behavior accordingly. The sensors detect what the environment is like. Mobile phones are already able to recognize whether they are used at home or outside. VIAG Interkom provides its Genion mobile phone tariffs with a special service: if the mobile phone is used at the customer’s home, the caller pays the normal tariff; if used outside, the caller pays the higher mobile phone tariff. All mobile phone providers use the same principle on a larger scale when customers cross country borders, connecting them to the foreign mobile network through a roaming service.

Appliances that know more about their environment will be able to function better and will give their users a better, more personalized service. A device that knows about its own environment and that of its user could transparently adapt to the situation, leading to the realization of the invisible computer. To improve interaction with such a device, its context awareness must be augmented. The appliance will be able to give better defaults for the situation and could automatically make choices that the user normally would have to make, thus reducing the amount of time to access a service.

Components of Context Awareness

To make devices aware of their context, the following components need to be implemented. They need to answer the following questions:

  • Activity – What does the user want to do?
  • Environment – Where is the user currently?
  • Self – What is the status of the device?

The context-related information can be used to control incoming and outgoing information to and from the device and to set device controls. This could have consequences for the use of these devices. A washing machine could check the time of day and not start itself until the local power supplier reduces its prices for energy. A mobile phone may know whether to ring urgently or buzz subtly, depending on the environment—a meeting room or a beach—of the mobile phone owner. A PDA may know to immediately initiate a network connection or to wait until a connection is cheaper and more reliable, depending on whether the user needs to download information or send e-mail. A laptop can know to switch to low-power mode because its user is engaged in a phone conversation across the room or to check one more time for e-mail before a plane takes off so that the owner can read and respond to the e-mail while airborne.

Context awareness is not a new concept. Many appliances already use sensors to find out what is happening in their environment. Today, however, only a few sensors are used, and the recognition process is still very simple, based on very little input. Establishing a high-level notion of context, based on the output of a group of simple sensors, is not very common. The field of robotics started with this approach and has probably been responsible for most of the progress that has been made so far. The context awareness in robotics is still expensive and slow. In most cases, the robots are also fixed in a certain environment. In the future, devices need to adapt to a changing environment quickly and, even more importantly, cheaply. In the era of pervasive computing, context awareness is mandatory.

To make devices aware of their context, developers must implement the following components: activity, environment, and self. The activity component describes the task the user is performing at the moment, or, more generally, what his or her behavior is. This aspect of context is focused on the users of the device and their habits. This component creates the personalization feature of the device. The environment component describes the status of the physical and social surroundings of the user. It takes into account the current location, the activities in the environment, and other external properties, like temperature or humidity. With the addition of this functionality, the device can set correct default values for its service and reduce the time for the user to use the service. Finally, the self component contains the status of the device itself. It indicates which capabilities are available for which person in a given environment. A device could give a set of services to person A in the house, but a different set of services while traveling. The same device could also provide a totally different set of services to person B regardless of its environment. When these components are onboard every device, the devices become personal and usable for anyone, anywhere.
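The three components can be sketched as a small data structure that a device consults when choosing its behavior. The field names, activities, and thresholds below are invented for this sketch.

```python
# Sketch of the three context components (activity, environment, self);
# a device combines them to choose sensible defaults, like the phone
# that knows whether to ring urgently or buzz subtly.

from dataclasses import dataclass

@dataclass
class Context:
    activity: str      # what the user is doing, e.g. "meeting", "idle"
    environment: str   # where the user is, e.g. "office", "beach"
    battery: float     # one element of the device's own status, 0.0-1.0

def ring_mode(ctx: Context) -> str:
    """Pick a phone's ring behavior from its current context."""
    if ctx.activity == "meeting":
        return "vibrate"      # never disturb a meeting
    if ctx.battery < 0.1:
        return "silent"       # conserve power when nearly empty
    return "ring"

print(ring_mode(Context("meeting", "office", 0.8)))  # vibrate
print(ring_mode(Context("idle", "beach", 0.05)))     # silent
print(ring_mode(Context("idle", "beach", 0.9)))      # ring
```

In a real device, the activity and environment fields would be filled in by sensors and machine-learning classifiers rather than set by hand.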

1.4 New Internet Business Models

The universal network will provide a new chance for startups to play an important role. In Chapters 2 and 3 you will see some startups that already play a significant role in home automation networks and m-commerce. But the complexity and scope of the pervasive computing market mean that it is difficult to single out any one type of player within any particular supply chain as having an advantage over any other. So, let us have a look at the different players in this new market segment (see Figure 1.2).

Six types of players in this market will certainly make a lot of money. First, the device manufacturers: if they have a good story to tell, people will buy new devices. Infrastructure providers and network operators will be the next to benefit quickly from the new opportunities, even more than they did with the traditional Internet, since the number of participants is much higher on the universal network. The large software infrastructure vendors with the most expertise in providing the necessary solutions will realize the technological opportunities. They will have to bring their solutions to market through partnerships with brand owners. But technology alone will not enable pervasive computing.

Once the infrastructure, the devices, and the network are up and running and the new business ideas are accepted by consumers, three further players will be able to generate money in the market: content providers, service providers, and content aggregators, which will offer information, products, and services through the universal network to consumers and businesses. Service providers that can attain “critical mass” will have the potential to take up positions of strength by acting as brokers of service agreements and other commercial relationships and by specializing content, applications, and services for particular usage contexts (combinations of users’ identities and preferences, user roles, and delivery channels). Branding strength lies in the hands of device manufacturers, service providers, and content aggregators because they are the players most likely to be known by users and therefore able to drive the market forces.

Once the market is up and running, a new value chain (see Figure 1.3) will be in place. The device manufacturers will create handsets and other types of terminals for the users. Accessories manufacturers will provide new types of add-ons, such as headsets, car kits, PC cards, and much more to supplement the offerings of the device manufacturers.

The wholesaler manages the distribution of handsets and accessories and provides the intermediate service between manufacturer and distributor. The distributor then moves the devices to the end users via physical shops or electronic distribution channels.

Meanwhile, software suppliers will write new software for the devices and the servers to support the business models of the companies in this emerging market. By making the software device independent, the whole concept of pervasiveness is established.

Network manufacturers will have to develop new network components to support the extended infrastructure of the universal network. Infrastructure operators will buy the network components and operate a network infrastructure on behalf of a licensee, which may also own and build infrastructure and sites. Existing network operators will enter the market and try to retain their leading role in the networking segment.

The third segment of the market, service (see Figure 1.4), will be dominated by the service providers that sell network services on behalf of the network operators. They typically operate billing systems and customer services and collect a margin for doing so.

Content providers work in the background and collect, edit, and create information to serve content aggregators that provide menu services and protocol conversions to enable access to content. The content is packaged and passed on to a regular service provider or a value-added service provider. The difference between regular service providers and value-added service providers is that the value-added service provider has a billing relationship with the subscriber that is independent of the network operator.

1.4.1 Device Manufacturers

The main opportunity that pervasive computing brings to device manufacturers is higher-volume mass-market sales and the introduction of new devices and surrounding services. With the correct marketing and partnerships in place, the greater availability and accessibility of content, applications, and services will make new and existing devices more appealing to purchase. By introducing next-generation devices, manufacturers can attract customers who refused to buy the old device for some reason.

Device manufacturers will not be able to live on selling hardware products alone in the future. If they want to remain important players, they need to move away from promoting hardware and start to create value-added services for their devices. Device manufacturers can easily use their established brands to move into this arena. Services could include a recipe database for the oven, a direct link from the refrigerator to the nearest grocery store, or an interactive TV information site on the television set.

Device manufacturers should not try to provide a single device as the solution for all problems. People like to have one device per problem, especially if the solutions provided are hardware dependent. People do not like to rely on a single piece of hardware where, if one of the functions breaks, the whole device and all its other functions must be sent in for repair. Each device should solve a single problem and should be even easier to use. Devices like VCRs that are difficult to program will not be salable in the future; neither will car stereos that have five buttons and 25 functions. The devices of the future will not need any configuration; people will buy them and be instantly able to use them, just like mobile phones, for example.

1.4.2 Infrastructure Providers

The universal network requires infrastructure. Infrastructure providers create the necessary devices and services to allow the connection of all devices that are enabled for pervasive computing services. The current Internet infrastructure based on routers, switches, and servers can be used for most of the universal network’s needs, but new technologies are needed to support the increasing requirements of the new network. This means that the current network infrastructure must be replaced by new and additional devices to support both traditional Internet and universal network services.

The desire of content providers and content aggregators to offer their products and services to customers through multiple channels means that pervasive computing brings huge potential opportunities to infrastructure providers of all kinds. Infrastructure providers will have the opportunity to sell existing and new products into multiple industries that may have been closed to them in the past. For example, as the mobile telecommunications industry gears up to offer increasingly sophisticated digital content and services to subscribers, operators and service providers need sophisticated platforms that will enable them to make content available over wireless data network services.

The Internet infrastructure is already changing. It is, for example, moving from IPv4 to IPv6, which means that every device on earth can have its own Internet Protocol (IP) address; with IPv4, the number of devices with their own IP address was limited. ATM backbones allow service level agreements (SLAs) on quality of service (QoS) on the Internet, so video and audio streams can be viewed over the Internet in good and consistent quality. Today, the quality of video streaming over the Internet is variable. New services will place new requirements on the architecture and infrastructure of the network. Infrastructure providers therefore need to create an open environment that allows users to plug in new functionality and services whenever required. In Chapter 4 we see several steps toward a unified infrastructure for a universal network.

1.4.3 Network Operators

Network operators provide data traffic to service and content providers and connect them to their customers. The most obvious benefit for network operators from pervasive computing will be the significant increase in demand for data traffic generated by two things. First, people will access content, applications, and services within a wider range of usage contexts. More people will access them through more diverse devices. Second, and even more important to network operators in the future, will be services and devices whose communication is initiated automatically by one of the participating services or devices. That one service or device handles context-switching and configures the other services and applications so that all devices can access crucial information without human interaction. More participating services will soon become available.

With pervasive computing technologies, network operators will be able to obtain more user-relevant information. They will know who uses what, when, and where. If permitted by the customer, they can leverage their customer information databases in order to cross- and up-sell new products and services. Selling targeted advertising space is also an option for the future. All mobile phone users in a certain area of the city could be informed about a price reduction in a nearby supermarket, for example, or all Madonna fans could be notified by their CD-player about a ticket auction for a concert in the area. Context-relevant information will become the new currency on the universal network.

1.4.4 Service Providers and Content Aggregators

Pervasive computing will erode current notions of subscriber segmentation. If people can access your service wherever, whenever, and however they like, it becomes very hard to tell whether a person is acting as a business user, for example, or as an individual consumer. On the one hand, you are able to offer your current service to more people around the world in more situations than ever before. On the other hand, your services need to be even more tailored than before to the needs of the individual user since the context of the usage can vary a lot.

Pervasive computing allows service providers and content aggregators to take advantage of this erosion and offer packages of content, applications, and services that can be tailored both to particular delivery channels and to content consumers, depending on the context in which they are consuming the content. Context awareness will become a major value-added service, providing the right flavor of a service to a particular person and greatly enhancing content aggregators’ value proposition. Content, therefore, must be provided in a device-independent way that allows users to maximize the potential service offering without being disturbed by content conversion or unusual handling of a device. Only then does pervasive computing technology become truly invisible.

Like network operators, service providers and content aggregators will also be able to leverage their customer databases to cross- and up-sell new products and services and to sell advertising space on their “portals.” Forget today’s portals with their personalization capabilities. Compared to the future, today’s portals look like the good old printed Yellow Pages. Portals today require customers to come back to one place on the Internet to find a host of information and services. Next-generation portals will follow the customer around and offer a set of services based on the needs and the context of the user.

1.4.5 Content Providers

Content providers are highly interested in reusing their content as often as possible for as little incremental cost as possible. This inexpensive reuse can only be achieved if the content is created in a sophisticated and high-quality manner. Pervasiveness makes possible the reuse of a certain type of content in several contexts. Pervasive computing promises content providers multiple new routes to existing customers and the possibility of reaching entirely new customer bases.

Imagine the content type of an “account statement” from your bank. With the use of pervasive computing technologies, this content type can be provided through multiple channels. Content providers can not only reach larger audiences but can reach them more often. Today, many people view their account statements online with their computer. In the future, they will be able to review them through a large set of devices, such as the mobile phone. This allows them to make faster and better-informed decisions because they can review all necessary information whenever they need it. If someone wanted to buy a car, she could verify the balance of an account and compare the price of the car to the current market value. Such audience-expansion possibilities also offer content providers the opportunity to cross- and up-sell other products and services to customers. If more money than expected is available on that particular account, the purchaser might buy a better car.

1.4.6 “Regular” Businesses

Once the infrastructure has been set up, regular businesses will be able to profit from pervasive computing. Any company providing information and services will be able to participate in the universal network regardless of size, location, or business. The Internet today already provides a great enhancement, especially for small-to-medium enterprises. In the near future, through pervasive computing, all companies with a network connection will be able to provide services, target customers, and deliver value to new value chains.

For many companies, pervasive computing will mean that new competitors will show up, new business models will appear, and new products and services will become available. This sounds very much like what happened with the economy a few years ago when the Internet started to boom. The next boom will create another extraordinary situation to cope with. Don’t forget what you learned during the first Internet revolution. Flexibility will remain a key. Businesses will have to adapt even more to their customers, because customers won’t use a single standard interface called a browser. They will have an unlimited number of ways to communicate with your company.

Customer relationship management (CRM) therefore becomes even more important. While it is important to open new channels to your customer, at the same time you must centralize all efforts around CRM to make sure that all interaction with the customer is recorded. Only then are you able to provide the perfect trading environment because you can anticipate the needs of the clients even better. Customers become even more valuable than they are today. Knowledge about your customer will influence the value of your shares. The more you know, the better the company will be valued.

1.5 Concerns

Pervasive computing means change. And change means resistance. Many people are disturbed by change and concerned about its effects. Pervasive computing can only be successful if change is managed properly. People need to understand the value of pervasive computing and see how their concerns are treated. Change also means a loss of power to the currently powerful. Change redistributes power; this is something welcomed by the powerless and feared by the powerful.

Part of the change process therefore needs to deal with the concerns of the people involved. Technology can only be successful if people are willing to use it. So, in this section I present a set of concerns that may arise when pervasive computing technologies are introduced.

Treat concerns seriously. One of the biggest problems with e-business on the Internet was that concerns weren’t addressed at all. The Internet hype made many believe that everything was possible and that the new technologies would make everything better. This is one of the biggest mistakes you can make. Technology does not make anything better by itself; it just amplifies existing processes. If the process is bad, technology will make it worse.

1.5.1 Strength of Traditional Links

Existing supply chains will not necessarily embrace the idea of pervasive computing. Pervasive computing means that new intermediaries will be able to play a significant role. This means that a next round of disintermediation is about to happen. E-business already created a first round of disintermediation, but pervasive computing will be much more profound because content, for example, is channel independent. Content created for web browsers will be reused for television, radio, and newspapers. A web company can suddenly produce a newspaper or run a television station without too much trouble.

Many television and radio stations will therefore try to prevent others from entering the market. But as long as these current industry barriers are not torn down, these markets will remain closed stovepipe channels that other types of content and service providers will find expensive to access.

Solutions to this problem will break up many traditional industry barriers. More and more companies will provide services and solutions in a cross-industrial manner while still addressing the specific needs of each industry.

1.5.2 Privacy and Security

Through their pervasiveness, these new technologies can collect a lot of data about the users and their habits. Most people are concerned that this data will be misused by the companies collecting the information. To make the most out of pervasive computing technologies, information must be collected and shared among partners. Certain information about customers’ online behavior, preferences, and so on will have to be shared between supply chains either explicitly or implicitly, to allow customers to navigate seamlessly around their information and services “universe.” Without technology that can reassure customers about service providers’ adherence to privacy policies, regulators may inhibit the sharing of such information. Many people will also be reluctant to share information with services they do not trust, so a trustworthy relationship must be built up. To ensure the privacy of the users, it may be useful to set up a trustworthy third party that collects all information on behalf of the services users want.

Pervasive computing will increase the number of service providers so significantly that the control of information flow between users and the service providers becomes impossible. A trusted third party could store personal information and provide the required information to preselected partners.

Security in general is also a sensitive area. On the Internet, people are afraid of giving away their credit card information. In a universal network, their houses and cars may be at risk if information is not properly secured. Application-level security in all networks must become at least as sophisticated as that available through the Internet to adequately address users’ concerns. At the same time, these security barriers need to be easy to use. Today’s firewall technologies are too complex for most people to configure. Future firewall technologies should be as easy to turn on as locking the front door of the house.

1.5.3 Piracy

Part of the reason that supply chains are currently so “closed” is that media owners are paranoid about unauthorized copying and distribution of their assets. Television channels, for example, buy broadcasting rights for a certain country. A universal network makes it difficult to restrict broadcasting rights to a certain country.

The Olympic Games, for example, cannot be broadcast live over the Internet, at least until 2008. The International Olympic Committee (IOC) says that the Internet does not restrict viewers to a certain region, and a major asset of the IOC is that it sells rights to certain countries and makes money selling the same content to different regions. An additional problem with advanced Internet technologies is that once the content, in this case the games, has been digitized, it can be redistributed via various channels, such as television sets, DVDs, radio, mobile phones, books, web sites, and so forth. Redistribution is not only cheap but, more importantly, very easy.

Another example is the MP3 hype, which drives record labels and some artists crazy because suddenly they are no longer part of the supply chain and are unable to earn revenues, at least not in the way they used to. Most record companies find it unthinkable to redesign their business models. But to survive, they need to.

Without significant advances in digital rights management technology, and without changes in their business models, media owners will themselves inhibit their own growth into multichannel media distribution.

Several companies are working on solutions, but so far these are only point solutions. Some companies, such as DigiMarc, work on adding digital watermarks to images. A digital watermark can be implemented in two ways. One way is to add invisible information to a picture: the image retains its original format and can be viewed with standard software. A JPEG image will be slightly modified but remains a JPEG image. The advantage is that the viewer needs no additional software; the major disadvantage is that in most cases the watermark can be removed rather easily, by resizing, for example. The other way is to create a format that can only be viewed with special software. This method ensures that nobody can remove the watermark, but it makes the content more difficult to use, since people need to have the right software installed on the right platform. In a universal network, this solution is not acceptable.
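The first approach can be sketched in a few lines. The toy example below hides information in the least significant bit (LSB) of each pixel value; the image is modeled as a flat list of 8-bit grayscale values to keep it library-free. This is an illustration of the general idea only, not DigiMarc's actual scheme, which is far more robust.

```python
# Minimal LSB watermarking sketch: hide bits in the least significant
# bit of each pixel value. Pixel values change by at most 1, so the
# mark is invisible to the eye -- but, as the text notes, resizing or
# re-encoding the image destroys it.

def embed(pixels, bits):
    """Hide a sequence of 0/1 bits in the LSBs of the pixel values."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear the LSB, then set it
    return marked

def extract(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

image = [200, 37, 158, 91, 64, 129, 240, 5]  # toy 8-pixel "image"
mark = [1, 0, 1, 1]                          # toy watermark
stamped = embed(image, mark)
print(extract(stamped, 4))  # recovers the watermark: [1, 0, 1, 1]
```

Because the format is untouched, any standard viewer still displays the stamped image, which is exactly the advantage, and the weakness, described above.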

The Secure Digital Music Initiative (SDMI) is a new format for music. It was created by the music and content industry to ensure that music distributed over the Internet can only be played by people who have paid for it. This is a nice idea, but as long as MP3 is out there, nobody will care about SDMI, because it adds complexity, and nobody will choose a more complex solution they have to pay for. This idea already failed once, when the banking industry tried to introduce the SET (Secure Electronic Transactions) standard: it created a lot of overhead and was more expensive. Here is the rule of thumb: the more people you want to reach, the easier and cheaper the solution has to be. The music industry therefore has to accept that MP3 is available and cannot simply be replaced by another format. To make money, the music industry needs to create value-added services.

Another company is working on linking users to their regions. InfoSplit (see Figure 1.5) uses a new technology to establish a relationship between a user and a geographic location, which would allow businesses to sell content by geographic region. The technology is far from perfect, of course; right now it is fairly easy to fool by faking IP addresses. But it can be expected to advance and become harder to circumvent.
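The basic mechanism behind such geolocation is a lookup from IP address blocks to regions. The sketch below illustrates the idea with an invented allocation table; it is not InfoSplit's technology, and real services rely on large, continually updated registry databases.

```python
# Toy IP-based geolocation: map a visitor's address onto a region via
# a table of address blocks. The blocks here are invented for
# illustration. As the text notes, a visitor using a faked address or
# a proxy in another region defeats this kind of lookup.

import ipaddress

BLOCKS = {  # hypothetical allocations: network block -> region
    "62.0.0.0/8": "Europe",
    "24.0.0.0/8": "North America",
    "210.0.0.0/8": "Asia-Pacific",
}

def region_of(ip):
    """Return the region whose block contains the address, if any."""
    addr = ipaddress.ip_address(ip)
    for block, region in BLOCKS.items():
        if addr in ipaddress.ip_network(block):
            return region
    return "unknown"

print(region_of("62.13.5.9"))     # -> Europe
print(region_of("198.51.100.7"))  # -> unknown
```

A content seller would consult such a lookup before streaming, which is why the accuracy of the underlying table, and the ease of faking an address, matter so much.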

These are only point solutions and will not work perfectly in a universal network. A new standard for tagging device-independent content needs to be created, and instead of building technology to identify a user's region, the industries involved need to rethink the whole concept of rights management. Simply mapping existing processes onto the new realities is not enough; this is a lesson many people failed to learn from the e-business (r)evolution. When a new technological platform creates new possibilities, the processes and ideas built on it need to be extended or rethought.

1.5.4 Disregard of Technology Standards

Software standards have not always been a success; especially in highly visible areas, the best proposed standard does not always become the most pervasive. Some technologies have been widely adopted across the corporate information technology (IT) world, and IP is a good example of such a standard. However, IP is still far from universally supported in the telecommunications industry, and only a handful of other nonhardware standards are models of successful standardization. Even when a standard has been adopted by most participants, a big problem can arise if some companies create their own “flavors” of it, making interconnection of different components difficult.

The problem is that whenever competitive differentiation can be gained, vendors will try to circumvent standardization processes. Vendors may pay lip service to standards by implementing them only partially; in many cases they will also add features that run against the spirit of the standard or offer “richer” or “more advanced” alternatives. Particularly in the hype-fueled software industry, vendors have historically been keen to do precisely that. Just look at the HTML standard and the vendors' very similar, but not identical, implementations of it, which make cross-browser development problematic.

Unless one infrastructure vendor maneuvers itself into a position of colossal strength, wide adoption of standard technologies will be the only cost-effective route to pervasive computing, and history suggests that such dominance would not be easy to engineer. True pervasive computing will not depend on any single standard: different technologies and software standards will be able to connect to each other flawlessly. Bridging standards will therefore play an even bigger role than they do today, and XML seems likely to be among the most important.

1.5.5 Capabilities of Hardware and Battery

Mobile devices are often not connected to a power outlet; we already experience this today with laptops, mobile phones, and PDAs. My first mobile phone (a Motorola, in 1995) lasted a day on a charge; my new Nokia runs for a week without recharging. My first laptop (a good old HP Omnibook 5700) allowed me to work for an hour without recharging; my new Apple iBook allows me to work for up to eight hours.

Personal mobile devices present the most complex technological challenges to vendors as potential platforms for accessing content, applications, and services. Devices that operate without a connection to power or network outlets are becoming ever more important in an ever faster-paced world; their batteries must therefore become smaller and more powerful.

However, the current state of battery technology is such that “advanced” features like sophisticated user interfaces and audio playback severely impact the mobility of compact devices. Perhaps more seriously, the other problem that hardware manufacturers currently face is that the hardware required to drive a 3G-capable wireless handset is currently too power hungry and inefficient to fit into a production device. Improvements will undoubtedly be made; some innovative technologies are in the pipeline, but the speed with which they come to market will have a profound effect on the suitability of mobile devices as pervasive computing access terminals.

Another big problem is the power-hungry Intel processor. Intel has made improvements compared to a few years ago, but they may not be enough for today's needs. Several companies are working on alternatives. In 1999, probably the best-known company without a product was Transmeta, famous chiefly for one employee: Linus Torvalds, the creator of Linux. In 2000 it shipped its first products, a new chip called Crusoe, which emulates the major chip architectures in software and saves battery power by running very efficiently. Some companies, such as Sony, are shipping laptops with the Crusoe chip; others have created stylish new devices for connecting to the Internet. Acer, for example, has built a web pad with a 50-meter wireless range and up to eight hours of battery life.

In the future we will see further enhancements in battery life and better, more powerful, yet less power-consuming CPUs. They will enable instant Internet access anytime and anywhere.