Converging Networks -- Can A Tortoise Catch A Hare?

N. F. Maxemchuk

1. Introduction

Communications - The ability to accumulate, add to, and then share knowledge is what distinguishes man from other beasts -- not the opposable thumb. After all, if one clever man figures out how to use his opposable thumb to make a weapon and kill a woolly mammoth, then there is one less woolly mammoth. If, however, that same man figures out how to tell others the secret, or better still how to record it in cave paintings for future generations, then another species is extinct.

Our methods of communications have continuously become richer and more varied. However, we suddenly seem to be in a time when many methods of communicating are converging on a single, common technique - the Internet.


The sudden collapse of communications diversity may appear to be a new phenomenon, but it isn't. New communications techniques have frequently replaced a class of older techniques:

While convergence appears to reduce communications diversity, new technologies have created new, often unimagined, applications that have eventually spawned a wider variety of more effective communications networks.

During this presentation I will discuss:


2. The Internet

Unlike the earlier technologies that changed communications, the Internet is not a particular invention, but rather a framework for incorporating and using new inventions. The original objective of the Internet was to make it possible for U.S. government-sponsored data networks to incorporate new technologies without discarding existing networks. Our data networks became so large, and new technologies evolved so rapidly, that it became too expensive for even the U.S. government to discard old networks as better technologies became available. In effect, the Internet is a prescription for avoiding obsolescence by continuously incorporating new technology into the network.

As long as the Internet can successfully execute its strategy it cannot be replaced by a superior technology. If there is a network with a superior technology, the Internet assimilates that technology, and the better network becomes the Internet.

For the last quarter of a century the IP protocol has been used to execute the Internet strategy. At this point I will describe the characteristics of the IP protocol that have made it well suited for the task, but toward the end of this talk I will describe a fatal flaw in the IP protocol and a specific technology that may replace IP.

Considering the changes that the ARPAnet/NSFnet/Internet has undergone, the IP protocol has performed well. In the 1970's the ARPAnet had a few tens of nodes connected by links that operated at 50 kbps. Today the Internet connects tens of millions of computers with a wide range of link speeds up to 2.4 Gbps. The service that the network provides has evolved from sharing the processing and storage on a few dozen computers among a small community of engineers and scientists, to the variety of data that is exchanged on the world wide web by the general population. It can be argued that in its quarter-century battle IP has survived a greater change in communications technology than occurred in the previous millennium.

IP uses two strategies that have facilitated network changes:


2.1 The IP Hour Glass

The IP layer is a communications barrier. Very little information passes between the services that use the network and the transport medium. Making these two layers independent of one another has allowed each to evolve without affecting the other. The result has been a more rapid evolution than would have been possible if changing one layer required changing the other.

In retrospect the separation seems obvious, but at the time that it was done it was revolutionary. Communications facilities were a precious commodity. It was standard procedure to optimize their use in any way possible. Isolating the services from the transmission facilities resulted in a more expensive network -- in the short run. In the long run, the ability to take advantage of new technologies without replacing the entire network has resulted in a less expensive solution.

Just because IP separation has worked well in the past does not mean that it will continue to work well in the future. The services that have evolved on the IP network all expected the same thing from the network -- best effort -- therefore, there wasn't much that the service needed to tell the transmission facilities. As we place different types of services on the network, it becomes more useful for each service to request quality guarantees or security levels, without upgrading the network to universally provide the highest levels. Once again we are worried about efficiency. It is questionable whether we are returning to a need to optimize the use of communications facilities, or if this need has just been masked by the similarity of the services that have used the network.

In addition to services becoming more varied and therefore having more to "tell" the transmission facilities, the transmission facilities have become more intelligent. The simple multiplexing schemes that were used in the 1970's were not able to do very much. By contrast, ATM can provide a different service for every cell on a transmission link. There is much more reason to communicate with an intelligent transmission facility than with a dumb one.

IP separation was achieved by cutting off communications between the services and transmission facilities. It is time to consider new ways to make services and transmission evolve independently. This will require major changes in IP. Fortunately, the second characteristic that has made IP successful is its ability to change -- gradually.

2.2 Tunneling

IP changes experimentally. A new version of IP is introduced into a set of facilities that form islands in the Internet. The islands are connected together by tunnels. If the new IP is better than the old IP the islands expand until they encompass the entire Internet. If the new version of IP is not useful, the islands disappear.

The current version of IP is IP-v4, version 4. The version that is being actively tested is IP-v6, version 6. Obviously, not all versions of IP succeed.

An IP tunnel is created by encapsulating the IP packet that is used in one region of the network inside an IP packet that can be interpreted in the region of the network that the packet is about to traverse. The tunnel connects two islands that use the same IP protocol. A network device at the entry to the tunnel constructs a new IP packet with a header that can be interpreted by the region that the packet is entering and the original IP packet as the data. The exit of the tunnel is a network device that peels off the header to reveal the original packet and sends it on its way.
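As a rough illustration of those two steps -- a sketch, not the actual router implementation -- the following Python fragment models tunnel entry and exit with a simplified, invented Packet class; the field names stand in for real IP header layouts.

# A minimal sketch of IP-in-IP tunneling, assuming a simplified packet model.
# The Packet class and its fields are illustrative placeholders, not real
# header formats or any standard library API.
from dataclasses import dataclass

@dataclass
class Packet:
    version: int      # e.g. 6 inside the islands, 4 across the old region
    src: str
    dst: str
    payload: bytes

def enter_tunnel(inner: Packet, tunnel_entry: str, tunnel_exit: str) -> Packet:
    """Tunnel entry: wrap the island's packet inside a packet the old region understands."""
    inner_bytes = repr(inner).encode()          # stand-in for real serialization
    return Packet(version=4, src=tunnel_entry, dst=tunnel_exit, payload=inner_bytes)

def exit_tunnel(outer: Packet) -> bytes:
    """Tunnel exit: peel off the outer header and forward the original packet."""
    return outer.payload                        # the serialized original packet, sent on its way

# Usage: an IPv6 packet crosses an IPv4-only region between two islands.
v6 = Packet(version=6, src="island-A-host", dst="island-B-host", payload=b"data")
outer = enter_tunnel(v6, tunnel_entry="gw-A", tunnel_exit="gw-B")
original = exit_tunnel(outer)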

Islands and tunnels create a different way of looking at new network services. Network services evolve rather than coming into existence in their final state. Initially new services are only available across the parts of a connection that are on the islands. The resulting service is better than the network can provide without any islands, but not as good as the complete service.

The difference between the Internet strategy and a conventional network strategy is clear if we consider introducing a reservation protocol to provide quality of service for voice connections. In the circuit switched telephone network, capacity is reserved along the entire path to provide an end-to-end guarantee. In the Internet strategy we can make reservations when passing through the islands of routers that support the new protocol, but would have to pass between these islands encapsulated in best effort packets. We can't make end-to-end guarantees. However, the quality will improve as we are able to route more of the path over islands.

3. The Telephone Network

The current digital telephone network and the current IP packet network are both the product of the early 70's. The networks are different not only because of the services that they deliver, but because of the basic constraints on their design. The ARPAnet was a pre-competitive technology that was funded by the U.S. government, while the digital telephone network had to deliver universal telephone service at a lower cost than the analog telephone network.

The low-cost terminals on the 1970's ARPAnet were $100,000 minicomputers. The plain black telephones on the 1970's telephone network had to cost less than 9 bucks. A four order of magnitude difference!

At a time when J-K flip-flops still cost about a buck a bit and NAND gates cost about 25 cents apiece, the ARPAnet experimented with processing-intensive store-and-forward techniques that routed individual data units. At about the same time the telephone network could not justify an A/D converter in each telephone. The A/D converters had to be located in the central office, where they could be shared. 64 kbps coders were adopted, not because they were the best quality or lowest rate coders available, but because they could be implemented economically -- with relatively little processing.

The ARPAnet could afford the luxury of drawing a horizontal line in the middle of its protocol stack to isolate services and transmission. The telephone network had to be vertically optimized to squeeze the last penny out of its investment. The entire digital telephone network in the United States is constructed around the magic number 64,000. The transmission rates in the multiplexing hierarchy are all multiples of 64 kbps. Switching and services were constrained by the number of processing cycles that could be assigned to a connection in the (1/8000)th of a second between samples.
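The magic number follows directly from the voice coder: one 8-bit sample every 1/8000 of a second gives

\[
8000\;\tfrac{\text{samples}}{\text{s}} \times 8\;\tfrac{\text{bits}}{\text{sample}} = 64{,}000\;\tfrac{\text{bits}}{\text{s}} .
\]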

The result of vertical optimization is a least cost network. The result of doing vertical optimization well is a network that is almost impossible to change because all of the components make use of the characteristics of other components. For instance, the 64 kbps coders cannot be changed to 9 kbps without changing virtually every piece of the network.

Five changes have occurred in the last quarter of a century that now force the telephone network to move toward the Internet.

We have already mentioned the first change: The price of processing and memory has been reduced by many orders of magnitude. Processors that are more powerful than the $100,000 ARPAnet interface are now used in home appliances. The memory in some of today's PC's would have cost more than the GNP of the United States in 1970. While the processing and memory in the 1970's ARPAnet was extravagant, the processing and memory in today's Internet is definitely a competitive technology.

The second change that has occurred is often overlooked by the technologists who worship Moore's Law: Communicating with computers has changed from batch processing jobs on punched cards to a rich interface that is fun to use. Computer output devices have changed from paper printers to color displays with more resolution than our TV sets, and to high-fidelity sound systems. Input devices have expanded from typewriter keyboards to a variety of click-and-search menus, joysticks, and voice recognition. Interactive multimedia displays of color, sound, and moving pictures can convey more information, in more interesting formats, than newspapers, books, TV, or radio. If computers were faster and cheaper than in the 70's but still used punched cards, they wouldn't be in our homes.

The third significant change is the WEB. The WEB has provided a uniform interface to the Internet that is less frightening than most VCR's. Before the WEB, in order to retrieve information, we had to know

The WEB made the Internet accessible and drastically reduced the need for the computer jocks who guarded the addresses, passwords and storage systems for data.

The fourth change is a decrease in long distance telephone revenues. In 1984, a 3-minute business call from NY to San Francisco cost $1.75; by 1996 that same call cost $1, and today it can be made for as little as 15 cents. More dramatically, in 1927 a 3-minute call from New York to London cost more than $640 in 1995 dollars; in 1995 that same call cost less than $3, and today it can be made for less than half a dollar. Decreasing cost can increase demand. A lot more New Yorkers are willing to call a friend in London for a dollar than for $600. However, once the cost is low enough that most people who want to make calls are making them, further reductions reduce revenues. Revenues for individual companies are also threatened because competition is cutting the pie into smaller pieces. In addition, there will soon be new technology threats, such as IP telephony, that will move telephone calls to other networks.

Fortunately, the same technologies that are decreasing the cost of long distance voice communications are creating a demand and expectation for new types of communications. The total amount that individuals pay for communications - voice, Internet, and cable TV - is increasing. Future services - such as those listed - are likely to result in a faster growth in the communications industry than has ever occurred. The only thing that telephone companies need to do to maintain and grow their revenues is to introduce new services more economically and more quickly than the start-ups.

The Internet is designed to separate services and facilities so that new services can be introduced quickly. Therefore, moving phone systems from a circuit switched network to an IP network provides one means for introducing new services more quickly. Is this solution preferable to designing a new vertically integrated network?

We can definitely design a vertically integrated network, optimized for our current technologies and a specific set of services, that costs less than the more flexible Internet. However, the fifth change that has occurred in the last quarter of a century is that the rate of change has increased. Whereas the telephone network provided voice service on point-to-point circuits for more than a century, it is very unlikely that our new services will remain unchanged for anywhere near as long. Based upon our experience with the Internet, we can expect that the services we are providing will change significantly before it becomes economical to scrap our next communications network.

We have the option of designing the least cost network for today's services and another least cost network when new services have difficulty using the current network, or designing a more flexible network that can support more generations of services, although none optimally. It is likely that the more flexible network will cost less when integrated over the life of the network. This has been the experience with the Internet. However, competition in telecommunications today is a lot more cutthroat than it was in the government owned and operated network that deployed IP.

If a company designs a flexible network that will cost less than an optimal network when integrated over 15 years, but which costs significantly more for the first 5 years, it is unlikely that company will survive the first five years. In order to compete, flexible networks may have to adopt an evolutionary strategy that more closely approaches local optima. Near the end of this presentation I will discuss a strategy, active networking, that may achieve this goal.

4. Problems to Convergence

Whichever way we decide to change the telephone network, there are a number of technical problems that must be addressed.

4.1 Evolutionary Strategy

One of the most difficult problems is the evolutionary strategy from the current circuit switched network to a future packet network. Because of the size of the current telephone network, we cannot conceivably duplicate the coverage of the entire network at once. Because the value of a telephone increases with the number of people you can reach, the new network cannot be separate from the existing network. Therefore, the new and old networks must co-exist and must interoperate.

We would like an evolutionary strategy from circuits to IP that is similar to the tunneling strategy that is used to gradually introduce new versions of IP. The strategy should allow us to introduce one or more new techniques in trial areas, islands in the telephone network, and let the islands grow if the techniques are good.

The difference between the telephone network and an IP network is much greater than the difference between two similar versions of IP. These differences make it more difficult to implement a strategy for gradual evolution.

The telephone network continuously transmits bits on a dedicated channel, while the Internet transmits packets of bits only when needed, on a shared channel. Gateways must convert between the continuous signals on the current telephone network and packets that have a variable delay.
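A minimal sketch of what such a gateway does, under assumed parameters: it chops the continuous 64 kbps stream into packets, and the far end uses a playout buffer to smooth the packets' variable delay back into a continuous stream. The 20 ms packetization interval and the buffer depth below are illustrative choices, not values from the talk.

SAMPLE_RATE = 8000          # samples per second on the circuit side
FRAME_MS = 20               # assumed packetization interval
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000   # 160 one-byte samples per packet

def packetize(circuit_bytes: bytes):
    """Circuit -> packets: emit one packet per 20 ms of continuous samples."""
    for i in range(0, len(circuit_bytes), FRAME_SAMPLES):
        yield circuit_bytes[i:i + FRAME_SAMPLES]

def playout(packets, buffer_depth=3):
    """Packets -> circuit: hold a few packets to absorb jitter, then play continuously."""
    queue = []
    for pkt in packets:
        queue.append(pkt)
        if len(queue) >= buffer_depth:
            yield queue.pop(0)
    while queue:
        yield queue.pop(0)

one_second_of_voice = bytes(8000)   # 64 kbps = 8000 bytes/s of 8-bit samples
stream = b"".join(playout(packetize(one_second_of_voice)))
assert stream == one_second_of_voice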

The telephone network sets up a path prior to transmitting data and stores information pertaining to the connection in each switch on the path, while the Internet is connectionless and each packet contains all of the information that the routers on a path need. Connections must be made and the state maintained in the telephone switches on a path, even though the routers along parts of the same path are stateless.

Telephones have little or no processing power. Therefore, services must be implemented in the network. In the Internet, services are implemented as applications in the end user's equipment. The devices that are connected to the future telephone network will be more intelligent and will support applications software. However, during the evolution the network must support telephones.

In the Internet it is the user's responsibility to maintain a service. When fewer users are interested in a service, the service becomes less useful and disappears. By contrast, there is no incentive for telephone users to request that services be removed from the network. Regulatory agencies discourage network providers from eliminating tariffed services that even small numbers of customers depend upon.

There are currently between 3000 and 3500 services supported on a 5ESS telephone switch. A few services, such as conventional point-to-point connections, 800 numbers, and teleconferencing, are used by a large number of customers. A few special services, such as the extra-reliable air traffic control systems or stock market tickers, are needed by very large customers. And, a few services, such as 911 numbers and political hot lines, are critical. These services must continue to be supported as the network evolves.

There are many other services that exist for historical reasons - for instance, they may have been the best way to perform a specific function with a particular technology - and would not be implemented in the same way today. As the telephone network evolves, it may be necessary to constrain the less used services in the current telephone network to islands of the old network. Alternatively, we may put hooks in the new network so that entrepreneurs can implement and sell services. Since telephone services have both network and application components, the "hooks" must include access to the processing in the network -- perhaps an active network.

Similar to services that exist for historical reasons, there are devices that are owned by a large number of customers, but will not be needed on the future network. Voice band modems should not be used to connect two computers in a digital packet network. It is more economical to place the data directly in packets. Specialized fax machines will not be needed when PC's with printers and scanners become more common. However, there will be customers who are unwilling to discard their working fax machines. There should be a significant, "short term" business for companies with servers that terminate new and old devices and convert between them.

4.2 Quality of Service

A technical problem that is receiving considerable attention at this and other conferences on networking involves the methods that are used to provide, on the Internet, the quality of service expected by telephony. The Internet is designed to support sporadic, best effort traffic. Packets experience a variable delay, dependent upon the other traffic on the network, and may be discarded at congested routers. By contrast, bits that are transmitted along a path in the telephone network have a constant delay. The bit rate is guaranteed, and bits are not lost because of short term oversubscription -- except in TASI systems. When voice is packetized and sent through the Internet the quality is very poor in comparison with the telephone network.

A variety of techniques have been proposed to provide quality of service on the Internet. I am not planning on taking part in the debate on which technique is best. Instead, we should acknowledge that all of these techniques have merit.

We are concerned with a network that is evolving. Therefore, at different points in time different combinations of these techniques will be appropriate. For instance, in the current Internet, the only way to maintain voice quality over a long distance may be to aggregate voice packets in a local region then use the current circuit switched network to bypass the long haul part of the Internet. At some point in time we may no longer need to use the telephone network to bypass the Internet.

As we select QoS mechanisms we should make certain that they do not compromise our ability to evolve new mechanisms and that they provide improvement when implemented over partial paths.

5. Where is the Internet going?

As our networks converge on the Internet, the Internet itself is not a stationary target. Entrepreneurs are expanding the types of service that the Internet provides, and the Internet Engineering Task Force, the standards body for the Internet, is continuously upgrading the network protocols. Both of these groups are more likely to cause evolutionary than revolutionary changes in the network.

Disruptive changes are more likely to come from the researchers who are funded to investigate precompetitive technologies. In the United States, most academic research on networking is funded by the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA). DARPA was responsible for conceiving the original ARPAnet, and the NSF was responsible for evolving it into the commercial Internet.

At the time when the Internet was being commercialized, I was on the NSF's advisory board on networking. There was a strong commitment to create a closely aligned experimental network to act as the technology engine for the Internet.

Today there are several experimental networking initiatives. Details on these initiatives can be found on their WEB sites. A primary objective of these networks has been to determine uses for very high transmission rates. However, the commercial success of the Internet has made it very difficult for these experimental networks to stay ahead of the actual Internet. High rate lines are being made available in the Internet before they are available in the experimental networks. At present, the experimental networks are installing OC3 and OC12 links, while the commercial networks are installing OC48 and planning on OC192. The ARPAnet/NSFnet served a useful role in creating the Internet because it developed new technologies before commercial companies were willing to invest in them. The current experimental networks are not being adventurous enough to perform the same task.

The research programs that are being funded by the NSF and DARPA can be seen by visiting the Directorate for Computer and Information Science and Engineering (CISE) at the NSF WEB site and the Information Technology Office at the DARPA WEB site. While the ARPAnet and NSFnet were US-centric, an increasing amount of Internet research is being done outside the US. I do not plan on surveying all of the research that may affect the direction of the Internet. Instead I will go into two topics that I feel are very likely to cause a major change in the Internet, multicast and active networks.

5.1 Multicast

Multicast can introduce entire new classes of service into the Internet. However, multicast has been around for about a dozen years. Most routers that are in the Internet can implement multicast, but many Internet Service Providers (ISP's) don't even bother to enable it. If multicast has been around for so long without having a major effect on the Internet, why should we expect it to have a major impact in the future?

The answer is applications. When I started working in the late 60's people were predicting that the volume of data communications would surpass the volume of voice communications. A quarter of a century later, in the early 90's, people stopped believing the prediction. Enter the WEB -- the prediction came true.

Two things are happening on the Internet that may cause a multicast explosion. The first affects the use of multicast for real-time signals and the second for data.

Cable TV companies are starting to provide Internet access with high rate cable modems. At the same time, digital TV is becoming a reality and TV set manufacturers are putting processors in TV sets -- for picture-in-picture, split screens, and other functions. At present, MBone TV is a small, poor quality picture on a PC. It's interesting as a novelty, but it's unlikely to replace our home entertainment system. In the near future we will be able to combine the quality of cable television with the larger number of choices that multicast can make available.

On-line brokers are threatening the established stock markets. It is clear that we will soon have a 24-hour, international stock exchange, with many trading floors. Because of the large amount of data on the identical "ticker tapes" that are distributed to traders, we should use multicast rather than unicast. There are additional technical advantages to using multicast. Some of these advantages are obtained directly from the reliable multicast protocol that Jo Mei Chang and I introduced in 1984. The others come from modifications of that protocol. NASA maintains a WEB site for this protocol, and as soon as the patents are filed, I'll describe the additional changes that are needed for this application.
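To illustrate the unicast-versus-multicast point with plain, existing tools -- this is a sketch using ordinary UDP sockets, not the reliable multicast protocol mentioned above -- a ticker source can hand one copy of each update to an IP multicast group and let the routers replicate it toward every subscribed trader. The group address and port are arbitrary placeholders, and sender and receiver would run as separate processes.

import socket

GROUP, PORT = "239.10.10.10", 5007   # placeholder administratively scoped group

def publish_tick(update: bytes) -> None:
    """Ticker source: one send per update, however many traders have joined."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    s.sendto(update, (GROUP, PORT))

def receive_ticks() -> None:
    """Trader: join the group once, then receive every update the source sends."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    membership = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        update, _ = s.recvfrom(1500)
        print(update.decode())

if __name__ == "__main__":
    publish_tick(b"XYZ 101.25 +0.50")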

The stock market can be implemented without new technology and is likely to be the first breakthrough for multicast. Internet customers will not allow their ISP's to turn off multicast if they need it to trade their stocks. Once multicast is widely enabled, new applications will be introduced and its use will explode.

5.2 Active Networks

Research on active networks is being funded by DARPA and the NSF. It is clear from the mission statement that active networks are intended to replace IP. It is my opinion that there are two main reasons why active networks will succeed. The first is that our 25-year-old IP philosophy is not able to deal with the current and future generations of services and transport. The second is that the rate of change of technology is now exceeding our ability to create standards, and active networks provide an alternative to standards.

Active networks have intelligent packets, called capsules. The capsules carry instructions as well as data, and the intermediate routers use the instructions to process the capsule. The strategy that an active network uses to allow services and transport to evolve is very different from the evolutionary strategy that IP has used. IP allows services and transport to change independently by not allowing them to communicate. Active networks allow transport and services to evolve independently by encouraging them to communicate in a common language that this generation and all future generations will understand.
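A minimal sketch of the capsule idea, under an invented toy instruction set: the Capsule class, the instruction names, and the router interface below are illustrative assumptions, not any standardized active-network format.

from dataclasses import dataclass, field

@dataclass
class Capsule:
    program: list          # instructions carried along with the data
    data: bytes
    state: dict = field(default_factory=dict)

class Router:
    def __init__(self, name, queues):
        self.name = name
        self.queues = queues              # e.g. {"voice": [], "best_effort": []}

    def process(self, capsule: Capsule) -> None:
        # The router interprets the capsule's own instructions instead of a
        # fixed, standardized header format.
        for op, arg in capsule.program:
            if op == "set_class":         # ask for a particular forwarding class
                capsule.state["class"] = arg
            elif op == "record_route":    # append this router's name to the data
                capsule.data += f",{self.name}".encode()
        self.queues.setdefault(capsule.state.get("class", "best_effort"), []).append(capsule)

# Usage: a voice capsule asks every router it crosses for the low-delay queue.
r = Router("r1", {"voice": [], "best_effort": []})
c = Capsule(program=[("set_class", "voice"), ("record_route", None)], data=b"samples")
r.process(c)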

When transport was dumb, not giving the services the ability to communicate with the transport layer was not a penalty. When all of the services were similar, not allowing them to describe their particular needs was not a penalty. Now that we are putting services with different requirements on the network, the services should describe the quality that they need. Now that processors are present in transport hardware, the transport can take instructions.

ATM is the first example of an intelligent transport that cannot be properly used by IP. IP over ATM essentially turns ATM into an old fashioned bit pipe. It sends packets over the ATM network the same way they would have traversed an FDM or TDM circuit. Meanwhile, ATM has the intelligence to treat every IP packet differently and provide the different qualities of service that are needed.

We could patch the ATM problem by changing the IP standard to set aside fields in the packet that specify the ATM class of service to be used. This fix would work for ATM, but a new fix would be needed for the next generation of transport -- perhaps self-configuring WDM networks -- that has different capabilities. It should be clear that all future transport will be more intelligent than our old fixed bandwidth circuits. After all, if we are putting processors in $100 home appliances to improve their use, we'll definitely put processors in multi-million dollar transport systems. Not only is IP unable to make the best use of our latest transport system, it is unlikely to properly use any future transport system.

The general purpose language in an active capsule will tell the transport exactly what the capsule requires. Different transport systems might interpret the instructions differently to meet the requirement to the best of their abilities. Just like the current Internet, a network's ability to provide a new service or a new quality of service will improve as it adds components that can interpret the instructions properly. At any point in time, different network providers may be at different points on the technology transition curve and have different qualities of service for the same request. This provides a way of distinguishing between communications companies by the quality, rather than just the price, of a service.

Network devices communicate using protocols that are resident in the device. If the devices are not using the same protocol, they cannot communicate. Therefore, protocols are standardized so that devices from different vendors can interoperate.

Standardization is a time consuming, political process in which all of the parties that make or use devices that require the protocol come to consensus. Technology is advancing so rapidly that the standardization process is no longer a reasonable model for implementing protocols. For instance:

Most network devices use software to implement protocols. Currently the software resides in the device; however, these devices can be modified to accept the software from capsules. Every capsule can carry different instructions. As long as the instructions implement a correct protocol, the protocol does not have to be a standard.

As an extreme example of the use of active networks to reduce standards: we can use active networks to eliminate the standard for the structure of capsules. Instead of standardizing how the instructions, data, and end of capsule are specified, we can just agree that a capsule starts with instructions. The instructions specify the capsule, and successive capsules on the same link may have different instructions and look very different. For instance, one capsule may specify the length of the instruction and data areas, and the next capsule may specify that the data ends with a control sequence. The former procedure is adequate for transferring data from memory. The latter procedure may be useful for real-time samples that haven't even been generated when we start sending the capsule.
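A minimal sketch of those two framing conventions, under invented encodings: the only shared agreement is that a capsule begins with its instructions, and the instructions say how to find the end of the data. The 2-byte length field and the control sequence are assumptions for illustration.

def parse_length_framed(buf: bytes):
    """Framing 1: the instructions carry an explicit data length (good for stored data)."""
    length = int.from_bytes(buf[:2], "big")       # 2-byte length field, by assumption
    data = buf[2:2 + length]
    rest = buf[2 + length:]
    return data, rest

def parse_delimiter_framed(buf: bytes, end=b"\x00\xff"):
    """Framing 2: the data runs until a control sequence (good for live samples
    whose length isn't known when transmission starts)."""
    idx = buf.index(end)
    return buf[:idx], buf[idx + len(end):]

# Two capsules on the same link can use different framings.
stream = (5).to_bytes(2, "big") + b"hello" + b"live-samples" + b"\x00\xff"
first, stream = parse_length_framed(stream)
second, stream = parse_delimiter_framed(stream)
assert (first, second) == (b"hello", b"live-samples")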

A big difference between active networks and the way that protocols are currently implemented is that software is injected into the network by a user, rather than being tested and installed by the network owner. A malicious user can inject software that damages the network. Java is an existence proof that the damage can be contained. Five years ago I would not have allowed anyone I did not know to download and run software on my computer. Today I hardly give it a second thought. Using an interpreter instead of a compiler keeps control away from the Java program. I depend upon the Java execution environment to limit the parts of my memory that the Java program can access and to guarantee that the program cannot keep control of my processor. The parts of my memory and processing that the Java program can access are the Java sandbox.

The Java execution environment and sandbox can protect the processors in the network the same way they protect our PC's, but in an active network we need more. The processor and memory aren't the only resources that the user is given access to. The user also has access to capacity and time -- when we consider quality of service guarantees, once we've given away time, we can't give it away again -- so time is as much a resource as memory. In order for active networks to succeed we need to extend the concepts of interpreters and sandboxes to the entire network. When we give a user the transmission facility to transmit the beginning of a packet, we must be able to wrest the facility back, even if the user never intends to end the packet.
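One way to picture the network sandbox -- a sketch under assumed budgets and an invented step/transmit interface, not a proposal from this talk -- is an execution environment in the router that meters what a capsule consumes, both processing and link time, and takes the facility back when a budget is spent.

class ResourceExhausted(Exception):
    pass

class NetworkSandbox:
    def __init__(self, max_steps=1000, max_bytes=1500):
        self.steps_left = max_steps       # bound on processing per capsule
        self.bytes_left = max_bytes       # bound on link time per capsule

    def step(self):
        if self.steps_left == 0:
            raise ResourceExhausted("processing budget spent")
        self.steps_left -= 1

    def transmit(self, chunk: bytes):
        if len(chunk) > self.bytes_left:
            raise ResourceExhausted("transmission budget spent")
        self.bytes_left -= len(chunk)
        return chunk                      # hand the chunk to the real link here

# A capsule that never ends its packet is cut off when its byte budget runs out.
box = NetworkSandbox(max_bytes=10)
try:
    while True:
        box.step()
        box.transmit(b"xxxx")
except ResourceExhausted:
    pass   # the router reclaims the link and moves on to the next capsule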

The concept of a network sandbox is made more difficult because we need different size sandboxes for different participants. If a buffer overflow procedure is in the code of an individual user, we cannot allow that user to discard the packets from other users -- every user considers someone else's packets less important than his own. However, if the same procedure is in the code supplied by a network provider, that provider can decide that data users can respond to loss better than voice users. There is a paper on my Web site that expands on the layers of sandboxes. The point I would like to make is that defining and implementing these sandboxes is a problem that must be solved before active networks are practical.

The language used in active networks is currently being investigated. The language may be as general as Java, or it may only select from a small set of communications options.

The most general active networks are pre-competitive in the same way that the ARPAnet was pre-competitive in the 1970's. We can build terabit routers and processors that can execute Java programs, but we cannot build an economical terabit router with sufficient processing to execute a Java program for every packet.

The less ambitious active networks are definitely competitive. Communications protocols have options, and protocol headers specify the options. The less ambitious active networks are just protocols with option fields. Therefore, these active networks are as competitive as our current protocols -- but not much more useful.

The range of active network solutions - from competitive to pre-competitive - can be used to hasten the move to active networks. If we create an evolutionary strategy from the simplest to the more complex active networks, then we can introduce a competitive active network now, and continuously upgrade it as more advanced techniques become economical. It took a quarter of a century for IP networks to become competitive. We should not need another quarter of a century before active networks start replacing the Internet.

There is a clear evolutionary strategy from IP to active networks. Active networks will appear as islands in the IP network, and active capsules will be encapsulated in IP packets to tunnel across the current infrastructure. There is also a clear evolutionary strategy between active networks. New active networks will appear as islands in old active networks, and one generation of capsule will be encapsulated in another.

The evolutionary strategy from an IP network, to an active network, to the next active network is the same as the evolutionary strategy from one generation of IP to the next. The development of active networks should follow the same experimental path that IP networks originally followed. Promising ideas should be implemented on islands. Successful ideas are those whose islands grow and unsuccessful ideas are those whose islands disappear.

The evolution from one generation of active networks to the next will be the evolution of the active networking language. We need more than a framework to deploy new languages. We also need an evolutionary strategy for a sequence of languages that range from option bits to general purpose. At the very least the sequence should guarantee that a device on a new island obtains the most useful set of services when passing through an older region, and a device in an older region does not lose capabilities when passing over a new island.

It is my opinion that the following research needs to be done before active networks are deployed.


6. Conclusion

When Soviet communism succumbed to western capitalism, and we all appeared to be on the same side, there were those who predicted the end of history. Without the competition generated by opposing ideologies mankind would exist in a stagnant Utopia, forever. It hasn't happened. It isn't in our nature to live peacefully, without change.

As all of our networks succumb to the Internet we are unlikely to enter a period of technological stagnation. While the competition between our major networks is ending, the seeds of the next technological revolution are being sown.

As communications engineers we have two jobs facing us. The first is to enable our networks to converge, so that we can purge our systems of antiquated technologies. The second is to set up new technological competitions so that we can continue to widen the gap between us and our fellow beasts.