Maximum Security:

A Hacker's Guide to Protecting Your Internet Site and Network



7

Birth of a Network: The Internet

Readers already familiar with the Internet's early development may wish to bypass this little slice of history. The story has been told many times.

Our setting is the early 1960s: 1962, to be exact. Jack Kennedy was in the White House, the Beatles had just recorded their first hit single (Love Me Do), and Christa Speck, a knock-out brunette from Germany, made Playmate of the Year. Most Americans were enjoying an era of prosperity. Elsewhere, however, Communism was spreading, and with it came weapons of terrible destruction.

In anticipation of impending atomic disaster, the United States Air Force charged a small group of researchers with a formidable task: creating a communication network that could survive a nuclear attack. Their concept was revolutionary: a network with no centralized control. If one (or 10, or 100) of its nodes were destroyed, the system would continue to run. In essence, this network (designed exclusively for military use) would survive the apocalypse itself (even if we didn't).

The individual largely responsible for the creation of the Internet is Paul Baran. In 1962, Baran worked at RAND Corporation, the think tank charged with developing this concept. Baran's vision involved a network constructed much like a fishnet. In his now-famous memorandum titled On Distributed Communications: I. Introduction to Distributed Communications Network, Baran explained:

The centralized network is obviously vulnerable as destruction of a single central node destroys communication between the end stations. In practice, a mixture of star and mesh components is used to form communications networks. Such a network is sometimes called a 'decentralized' network, because complete reliance upon a single point is not always required.


Cross Reference: The RAND Corporation has generously made this memorandum and the report delivered by Baran available via the World Wide Web. The documents can be found at http://www.rand.org/publications/electronic/.

Baran's model was complex. His presentation covered every aspect of the proposed network, including routing conventions. For example, data would travel along the network by whatever channels were available at that precise moment. In essence, the data would dynamically determine its own path at each step of the journey. If it encountered some sort of problem at one crossroads of the Net, the data would find an alternate route. Baran's proposed design provided for all sorts of contingencies. For instance, a network node would only accept a message if that node had adequate space available to store it. Equally, if a data message determined that all outgoing lines were currently unavailable (the "all lines busy" scenario), the message would wait at the current node until a data path became available. In this way, the network would provide intelligent data transport. Baran also detailed many other aspects of the network's operation.
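
To make the routing idea concrete, here is a toy sketch in C of the forwarding rule just described (a simplified illustration only, not Baran's actual design): the node forwards a message over any link that is currently up, and holds the message when all lines are busy.

#include <stdio.h>

#define LINKS 4   /* number of outgoing links from this node */

/* Hypothetical link table: nonzero means the link is currently usable. */
static int link_up[LINKS] = { 0, 1, 0, 1 };

/* Return the index of any available outgoing link, or -1 if all
   lines are busy and the message must wait at this node. */
int choose_route(void)
{
    int i;
    for (i = 0; i < LINKS; i++)
        if (link_up[i])
            return i;
    return -1;   /* hold the message until a path becomes available */
}

int main(void)
{
    int route = choose_route();
    if (route >= 0)
        printf("Forwarding message on link %d\n", route);
    else
        printf("All lines busy; message waits at this node\n");
    return 0;
}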


In essence, Baran eloquently articulated the birth of a network in painstaking detail. Unfortunately, however, his ideas were ahead of their time. The Pentagon had little faith in such radical concepts. Baran delivered to defense officials an 11-volume report that was promptly shelved.

The Pentagon's shortsightedness delayed the birth of the Internet, but not by much. By 1965, the push was on again. Funding was allocated for the development of a decentralized computer network, and in 1969, that network became a reality. That system was called ARPANET.

As networks go, ARPANET was pretty basic, not even closely resembling the Internet of today. Its topology consisted of links between machines at four academic institutions (Stanford Research Institute, the University of Utah, the University of California at Los Angeles, and the University of California at Santa Barbara).

One of those machines was a DEC PDP-10. Only more mature readers will remember this model. These were massive, ancient beasts, now more useful as furniture than as computing devices. I mention the PDP-10 here to briefly recount another legend in computer history (one that many of you may never have heard). By taking this detour, I hope to give you a frame of reference from which to measure just how long ago this was in computer history.

It was at roughly that time that a Seattle, Washington, company began providing computer time sharing. The company reportedly took on two bright young men to test its software. These young men both excelled in computer science, and were rumored to be skilled in the art of finding holes within systems. In exchange for testing company software, the young men were given free dial-up access to a PDP-10 (this would be the equivalent of getting free access to a private bulletin board system). Unfortunately for the boys, the company folded shortly thereafter, but the learning experience changed their lives. At the time, they were just old enough to attend high school. Today, they are in their forties. Can you guess their identities? The two boys were Bill Gates and Paul Allen.

In any event, by 1972, ARPANET had some 40 hosts (in today's terms, that is smaller than many local area networks, or LANs). It was in that year that Ray Tomlinson, a member of Bolt, Beranek, and Newman, Inc., forever changed the mode of communication on the network. Tomlinson created electronic mail.

Tomlinson's invention was probably the single most important computer innovation of the decade. E-mail allowed simple, efficient, and inexpensive communication between various nodes of the network. This naturally led to more active discussions and the open exchange of ideas. Because many recipients could be added to an e-mail message, these ideas were more rapidly implemented. (Consider the distinction between e-mail and the telephone. How many people can you reach with a modern conference call? Compare that to the number of people you can reach with a single e-mail message. For group-oriented research, e-mail cannot be rivaled.) From that point on, the Net was alive.

In 1974, Tomlinson contributed to another startling advance. He (working in parallel with Vinton Cerf and Robert Kahn) helped develop the Transmission Control Protocol (TCP). This protocol provided a new means of breaking data into small pieces, moving those pieces across the network, and reassembling them at the other end.


NOTE: TCP is the primary protocol used on the Internet today. It was developed in the early 1970s and was ultimately integrated into Berkeley Software Distribution UNIX. It has since become an Internet standard. Today, almost all computers connected to the Internet run some form of TCP. In Chapter 6, "A Brief Primer on TCP/IP," I closely examine TCP as well as its sister protocols.
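
To give a feel for how applications use TCP today, here is a minimal sketch of a TCP client written in C against the Berkeley sockets interface (the API that grew out of BSD UNIX). The address 192.0.2.1 and port 80 are placeholders, not values from this chapter; the point is simply that the program hands its data to TCP and lets the protocol do the rest.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    /* Create a TCP (stream) socket. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Placeholder address and port; substitute a real server here. */
    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(80);
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);

    /* TCP itself handles breaking the data into segments, retransmitting
       lost pieces, and reassembling everything in order at the far end. */
    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char *msg = "Hello over TCP\r\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}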

By 1975, ARPANET was a fully functional network. The groundwork had been done and it was time for the U.S. government to claim its prize. In that year, control of ARPANET was given to an organization then known as the United States Defense Communications Agency (this organization would later become the Defense Information Systems Agency).

To date, the Internet is the largest and most comprehensive structure ever designed by humankind. Next, I will address some peripheral technological developments that helped form the network and bring it to its present state of complexity. To do this, I will start with C.

What Is C?

C is a popular computer programming language, often used to write language compilers and operating systems. I examine C here because its development (and its relationship to the UNIX operating system) is directly relevant to the Internet's development.

Nearly all applications designed to facilitate communication over the Internet are written in C. Indeed, both the UNIX operating system (which forms the underlying structure of the Internet) and TCP/IP (the suite of protocols used to traffic data over the Net) were developed in C. It is no exaggeration to say that if C had never emerged, the Internet as we know it would never have existed at all.

For most non-technical users, programming languages are strange, perplexing things. However, programming languages (and programmers) are the very tools by which a computer program (commonly called an application) is constructed. It may interest you to know that if you use a personal computer or workstation, better than half of all applications you now use were written in the C language. (This is true of all widely used platforms, including Macintosh.) In this section, I want to briefly discuss C and pay some homage to those who helped develop it. These folks, along with Paul Baran, Ken Thompson, and a handful of others, are the grandparents of the Internet.

C was created in the early 1970s by Dennis M. Ritchie of Bell Labs; Brian W. Kernighan later co-authored the definitive book on the language. These two men are responsible for many technological advancements that formed the modern Internet, and their names appear several times throughout this book.

Let's discuss a few basic characteristics of the C programming language. To start, C is a compiled as opposed to an interpreted language. I want to take a moment to explain this critical distinction because many of you may lack programming experience.

Interpreted Programming Languages

Most programs are written in plain, human-readable text. This text is made up of various commands and blocks of programming code called functions. In interpreted languages, this text remains in human-readable form. In other words, such a program file can be loaded into a text editor and read without difficulty.

For instance, examine the program that follows. It is written for the Practical Extraction and Report Language (Perl). The purpose of this Perl program is to get the user's first name and print it back out to the screen.


NOTE: Perl is strictly defined as an interpreted language, but it does perform a form of compilation. However, that compilation occurs in memory and never alters the program file on disk.

The program, shown next, reads almost like plain English:

#!/usr/bin/perl
print "Please enter your first name:";
$user_firstname = <STDIN>;
chop($user_firstname);
print "Hello, $user_firstname\n"
print "Are you ready to hack?\n"

Its construction is designed to be interpreted by Perl. The program performs five functions: it prints a prompt, reads the user's first name from standard input, removes the trailing newline character, prints a greeting that includes the name, and prints a closing question.


Interpreted languages are commonly used for programs that perform trivial tasks or tasks that need be done only once. These are sometimes referred to as throwaway programs. They can be written quickly and take virtually no room on the local disk.

Interpreted programs do have one significant limitation: in order to run, they must be executed on a machine that contains the appropriate interpreter. If you take a Perl script and install it on a DOS-based machine (without first installing the Perl interpreter), it will not run. The user will be confronted with an error message (Bad command or file name). Thus, programs written in Perl are dependent on the interpreter for execution.

Microsoft users will be vaguely familiar with this concept in the context of applications written in Visual Basic (VB). VB programs typically rely on runtime libraries such as VBRUN400.DLL. Without such libraries present on the drive, VB programs will not run.


Cross Reference: Microsoft users who want to learn more about such library dependencies (but don't want to spend the money for VB) should check out Envelop. Envelop is a completely free 32-bit programming environment for Windows 95 and Windows NT. It very closely resembles Microsoft Visual Basic and generates attractive, fully functional 32-bit programs. It, too, has a set of runtime libraries and extensive documentation about how those libraries interface with the program. You can get it at ftp://ftp.cso.uiuc.edu/pub/systems/pc/winsite/win95/programr/envlp14.exe

The key advantages of interpreted languages are speed and simplicity of development: such programs can be written quickly, modified without a recompilation step, and take up very little room on the local disk.


Interpreted languages are popular, particularly in the UNIX community. Well-known examples include Perl, Python, Tcl, and the various UNIX shell languages.


The pitfall of interpreted languages is that programs written in them generally run much slower than those written in compiled languages.

Compiled Languages

Compiled languages (such as C) are much different. Programs written in compiled languages must be converted into binary format before they can be executed. In many instances, this format is almost pure machine-readable code. To generate this code, the programmer sends the human-readable program code (plain text) through a compilation process. The program that performs this conversion is called a compiler.

After the program has been compiled, no interpreter is required for its execution. It will run on any machine that runs the target operating system for which the program was written. Exceptions to this rule may sometimes apply to certain portions of a compiled program. For example, certain graphical functions are dependent on proprietary graphics libraries. When a C program is written using such graphical libraries, certain library components must be shipped with the binary distribution. If such library components are missing when the program is executed, the program will exit on error.

The first interesting point about compiled programs is that they are fast. Because the program executes directly as machine code (rather than being translated line by line at runtime), a great deal of speed is gained. However, as the saying goes, there is no such thing as a free lunch: although compiled programs are fast, they are also much larger on disk than programs written in interpreted languages.

Examine the following C program. It is identical in function to the Perl program listed previously. Here is the code in its yet-to-be-compiled state:

#include <stdio.h>

int main()
{
    char name[20];
    printf("Please enter your first name:   ");
    scanf("%19s", name);   /* limit input so it cannot overflow the buffer */
    printf("Hello, %s\n", name);
    printf("Are you ready to hack?\n");
    return 0;
}

Using a standard C compiler, I compiled this code in a UNIX operating system environment. The difference in size between the two programs (the one in Perl and the one in C) was dramatic. The Perl program was 150 bytes in size; the C program, after being compiled, was 4141 bytes.

This might seem like a huge liability on the part of C, but in reality, it isn't. The C program can be ported to almost every operating system. Furthermore, once compiled for a given class of operating system, it will run on any member of that class: if compiled for DOS, it will work equally well under all DOS-like environments (such as PC-DOS or NDOS), not just Microsoft's MS-DOS.

Modern C: The All-Purpose Language

C has been used over the years to create all manner of programs on a variety of platforms. Many Microsoft Windows applications have been written in C. Similarly, as I will explain later in this chapter, nearly all basic UNIX utilities are written in C.

To generate programs written in C, you must have a C compiler. C compilers are available for most platforms. Some of these are commercial products and some are free to the public. Table 7.1 lists common C compilers and the platforms on which they are available.

Table 7.1. C compilers and their platforms.

Compiler                  Platform
GNU C (free)              UNIX, Linux, DOS, VAX
Borland C                 DOS, Windows, Windows NT
Microsoft C               DOS, Windows, Windows NT
Watcom C                  DOS, Windows, Windows NT, OS/2
Metrowerks CodeWarrior    Mac, Windows, BeOS
Symantec                  Macintosh, Microsoft platforms

Advantages of C

One primary advantage of the C language is that it is smaller than many other languages, so the average individual can learn C within a reasonable period of time. Another advantage is that C now conforms to a national (ANSI) standard. Thus, a programmer can learn C and apply that knowledge on any platform, anywhere in the country.

C has direct relevance to the development of the Internet. As I have explained, most modern TCP/IP implementations are written in C, and these form the basis of data transport on the Internet. More importantly, C was used in the development of the UNIX operating system. As I will explain in the next section of this chapter, the UNIX operating system has, for many years, formed the larger portion of the Internet.

C has other advantages: One is portability. You may have seen statements on the Internet about this or that program being ported to another operating system or platform, and many of you might not know exactly what that means. Portability refers to the capability of a program to be reworked to run on a platform other than the one for which it was originally designed (for example, taking a program written for Microsoft Windows and porting it to the Macintosh platform). Portability is very important, especially in an environment like the Internet, because the Internet comprises many different types of systems. In order to make a program available networkwide, that program must be easily portable to all of those platforms.

Unlike code written in many other languages, C code is highly portable. For example, consider Visual Basic. Visual Basic is a wonderful rapid application development tool that can build programs to run on any Microsoft-based platform. However, that is the extent of it. You cannot take the raw code of a VB application and recompile it on a Macintosh or a Sun SPARCstation.

In contrast, the majority of C programs can be ported to a wide variety of platforms. As such, C-based programs available for distribution on the Internet are almost always distributed in source form (in other words, they are distributed in plain text code form, or in a form that has not yet been compiled). This allows the user to compile the program specifically for his or her own operating system environment.
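
To illustrate what porting C code actually involves, here is a small sketch of the kind of preprocessor conditionals a portable program might use. The macro names tested here (_WIN32, __unix__, __APPLE__) are common compiler conventions rather than anything discussed in this chapter.

#include <stdio.h>

/* Pick a platform-specific path separator at compile time.  Which
   macros are defined depends on the compiler and target platform;
   _WIN32 and __unix__ are widely supported examples. */
#if defined(_WIN32)
#define PATH_SEPARATOR '\\'
#elif defined(__unix__) || defined(__APPLE__)
#define PATH_SEPARATOR '/'
#else
#define PATH_SEPARATOR '/'   /* reasonable default for other systems */
#endif

int main(void)
{
    printf("This build uses '%c' as the path separator.\n", PATH_SEPARATOR);
    return 0;
}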

Limitations of C and the Creation of C++

Despite these wonderful features, C has certain limitations. It is not, for example, an object-oriented language, and managing very large programs in C (where the code exceeds 100,000 lines) can be difficult. C++ was created to address these problems. The lineage of C++ is deeply rooted in C, but the two languages work quite differently. Because this section contains only brief coverage of C, I will not discuss C++ extensively. However, you should note that C++ is generally included as an option in most modern C compilers.

C++ is an extremely powerful programming language and has led to dramatic changes in the way programming is accomplished. C++ allows for encapsulation of complex functions into entities called objects. These objects allow easier control and organization of large and complex programs.

In closing, C is a popular, portable, and lightweight programming language. It is based on a national standard and was used in the development of the UNIX operating system.


Cross Reference: Readers who want to learn more about the C programming language should obtain The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie (Prentice Hall, ISBN 0-13-110370-9). This book is the standard reference. It is extremely revealing; after all, it was written by the two men most closely associated with the language.

Other popular books on C include

C: A Reference Manual. Samuel P. Harbison and Guy L. Steele. Prentice-Hall. ISBN 0-13-109802-0. 1987.

Teach Yourself C in 21 Days. Peter Aitken and Bradley Jones. Sams Publishing. ISBN 0-672-30448-1.

Teach Yourself C. Herbert Schildt. Osborne McGraw-Hill. ISBN 0-07-881596-7.


UNIX

The UNIX operating system has a long and rich history. Today, UNIX is one of the most widely used operating systems, particularly on the Internet. In fact, UNIX comprises much of the Net; it is the number one operating system used on servers in the void.

Created in 1969 by Ken Thompson of Bell Labs, the first version of UNIX ran on a Digital Equipment Corporation (DEC) PDP-7. Of course, that system bore no resemblance to modern UNIX. For example, UNIX has been traditionally known as a multiuser system (in other words, many users can work simultaneously on a single UNIX box). In contrast, the system created by Thompson was reportedly a single-user system, and a bare bones one at that.

When users today think of an operating system, they imagine something that includes basic utilities, text editors, help files, a windowing system, networking tools, and so forth. This is because the personal computer has become a household item. As such, end-user systems incorporate great complexity and user-friendly design. Alas, the first UNIX system was nothing like this. Instead, it was composed of only the most necessary utilities to operate effectively. For a moment, place yourself in Ken Thompson's position. Before you create dozens of complex programs like those mentioned previously, you are faced with a more practical task: getting the system to boot.

In any event, Thompson and Dennis Ritchie ported UNIX to a DEC PDP-11/20 a year later. From there, UNIX underwent considerable development. Between 1970 and 1973, UNIX was completely reworked and written in the C programming language. This was reportedly a major improvement and eliminated many of the bugs inherent to the original implementation.

In the years that followed, UNIX source code was distributed to universities throughout the country. This, more than anything else, contributed to the success of UNIX.

First, the research and academic communities took an immediate liking to UNIX. Hence, it was used in many educational exercises. This had a direct effect on the commercial world. As explained by Mike Loukides, an editor for O'Reilly & Associates and a UNIX guru:

Schools were turning out loads of very competent computer users (and systems programmers) who already knew UNIX. You could therefore "buy" a ready-made programming staff. You didn't have to train them on the intricacies of some unknown operating system.

Also, because the source was free to these universities, UNIX was open for development by students. This openness quickly led to UNIX being ported to other machines, which only increased the UNIX user base.


NOTE: Because UNIX source is widely known and available, more flaws in the system security structure are also known. This is in sharp contrast to proprietary systems. Such proprietary software manufacturers refuse to disclose their source except to very select recipients, leaving many questions about their security as yet unanswered.

Several years passed, and UNIX continued to gain popularity. It became so popular, in fact, that in 1978, AT&T decided to commercialize the operating system and demand licensing fees (after all, it had obviously created a winning product). This caused a major shift in the computing community. As a result, the University of California at Berkeley created its own version of UNIX, thereafter referred to as the Berkeley Software Distribution or BSD. BSD was (and continues to be) extremely influential, being the basis for many modern forms of commercial UNIX.

An interesting development occurred during 1980. Microsoft released a new version of UNIX called XENIX. This was significant because the Microsoft product line was already quite extensive. For example, Microsoft was selling versions of BASIC, COBOL, Pascal, and FORTRAN. However, despite a strong effort by Microsoft to make its XENIX product fly (and even an endorsement by IBM to install the XENIX operating system on its new PCs), XENIX would ultimately fade into obscurity. Its popularity lasted a mere five years. In contrast, MS-DOS (released only one year after XENIX was introduced) took the PC world by storm.

Today, there are many commercial versions of UNIX. I have listed a few of them in Table 7.2.

Table 7.2. Commercial versions of UNIX and their manufacturers.

UNIX Version       Software Company
SunOS & Solaris    Sun Microsystems
HP-UX              Hewlett Packard
AIX                IBM
IRIX               Silicon Graphics (SGI)
DEC UNIX           Digital Equipment Corporation

These versions of UNIX run on proprietary hardware platforms, on high-performance machines called workstations. Workstations differ from PC machines in several ways. For one thing, workstations contain superior hardware and are therefore more expensive. This is due in part to the limited number of workstations built. PCs are manufactured in large numbers, and manufacturers are constantly looking for ways to cut costs. A consumer buying a new PC motherboard has a much greater chance of receiving faulty hardware. Conversely, workstation buyers enjoy more reliability, but may pay five or even six figures for their systems.

The trade-off is a hard choice. Naturally, for average users, workstations are both impractical and cost prohibitive. Moreover, PC hardware and software are easily obtainable, simple to configure, and widely distributed.

Nevertheless, workstations have traditionally been more technologically advanced than PCs. For example, onboard sound, Ethernet, and SCSI were standard features of workstations in 1989. In fact, onboard ISDN was integrated not long after ISDN was developed.

Differences also exist depending upon manufacturer. For example, Silicon Graphics (SGI) machines contain special hardware (and software) that allows them to generate eye-popping graphics. These machines are commonly used in the entertainment industry, particularly in film. Because of the extraordinary capabilities of the SGI product line, SGI workstations are unrivaled in the graphics industry.

However, we are only concerned here with the UNIX platform as it relates to the Internet. As you might guess, that relationship is strong. As I noted earlier, the U.S. government's development of the Internet was implemented on the UNIX platform. As such, today's UNIX system contains within it the very building blocks of the Net. No other operating system had ever been so expressly designed for use with the Internet. (Bell Labs is currently developing a system that may even surpass UNIX in this regard: Plan 9 from Bell Labs, which is covered in Chapter 21, "Plan 9 from Bell Labs.")

Modern UNIX can run on a wide variety of platforms, including IBM-compatible and Macintosh. Installation is typically straightforward and differs little from installation of other operating systems. Most vendors provide CD-ROM media. On workstations, installation is performed by booting from a CD-ROM. The user is given a series of options and the remainder of the installation is automatic. On other hardware platforms, the CD-ROM medium is generally accompanied by a boot disk that loads a small installation routine into memory.

Likewise, starting a UNIX system is similar to booting other systems. The boot routine makes quick diagnostics of all existing hardware devices, checks the memory, and starts vital system processes. In UNIX, common system processes started at boot include init (the parent of all other processes), the system logger (syslogd), the scheduling daemon (cron), the network super-server (inetd), and the mail transport agent (sendmail).


After the system boots successfully, a login prompt is issued to the user. Here, the user provides his or her username and password. When login is complete, the user is generally dropped into a shell environment. A shell is an environment in which commands can be typed and executed. In this respect, at least in appearance, basic UNIX marginally resembles MS-DOS. Navigation is accomplished by changing from one directory to another. DOS users can easily navigate a UNIX system using the conversion information in Table 7.3.

Table 7.3. Command conversion table: UNIX to DOS.

DOS Command       UNIX Equivalent
cd <directory>    cd <directory>
dir               ls -l
type | more       more
help <command>    man <command>
edit              vi


Cross Reference: Readers who wish to know more about basic UNIX commands should point their WWW browser to http://www.geek-girl.com/Unixhelp/. This archive is one of the most comprehensive collections of information about UNIX currently online.
Equally, more serious readers may wish to have a handy reference at their immediate disposal. For this, I recommend UNIX Unleashed (Sams Publishing). The book was written by several talented UNIX wizards and provides many helpful tips and tricks on using this popular operating system.

Say, What About a Windowing System?

UNIX supports many windowing systems. Much depends on the specific platform. For example, most companies that have developed proprietary UNIX systems have also developed their own windowing packages, either partially or completely. In general, however, all modern UNIX systems support the X Window System from the Massachusetts Institute of Technology (MIT). Whenever I refer to the X Window System in this book (which is often), I refer to it as X. I want to quickly cover X because some portions of this book require you to know about it.

In 1984, the folks at MIT founded Project Athena. Its purpose was to develop a graphical interface system that would run on workstations or networks of disparate design. During the initial stages of research, it immediately became clear that in order to accomplish this task, X had to be hardware independent. It also had to provide transparent network access. As such, X is not only a windowing system, but also a network protocol based on the client/server model.

The individuals primarily responsible for early development of X were Robert Scheifler and Ron Newman, both from MIT, and Jim Gettys of DEC. X vastly differs from other types of windowing systems (for example, Microsoft Windows), even with respect to the user interface. This difference lies mainly in a concept sometimes referred to as workbench or toolkit functionality. That is, X allows users to control every aspect of its behavior. It also provides an extensive set of programming resources. X has often been described as the most complex and comprehensive windowing system ever designed. X provides for high-resolution graphics over network connections at high speed and throughput. In short, X comprises some of the most advanced windowing technology currently available. Some users characterize the complexity of X as a disadvantage, and there is probably a bit of merit to this. So many options are available that the casual user may quickly be overwhelmed.
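
For the curious, here is a bare-bones sketch of an X client written in C against Xlib, the library that speaks the X protocol on behalf of a program. This is an illustrative sketch only (it must be compiled with the X development headers and linked against the X11 library); it simply opens a small window on the local display and waits for a keypress.

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    /* Connect to the X server named by the DISPLAY environment variable. */
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "Cannot open display\n");
        return 1;
    }

    int screen = DefaultScreen(dpy);

    /* Create and map a simple 300x200 window. */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 300, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    /* Process events until the first keypress, then exit. */
    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress)
            break;
    }

    XCloseDisplay(dpy);
    return 0;
}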


Cross Reference: Readers who wish to learn more about X should visit the site of the X Consortium. The X Consortium comprises the authors of X. This group constantly sets and improves standards for the X Window System. Its site is at http://www.x.org/.


NOTE: Certain versions of X can be run on IBM-compatible machines in a DOS/Windows Environment.

Users familiar with the Microsoft platform can grasp the use of X in UNIX by likening it to the relationship between DOS and Microsoft Windows 3.11. The basic UNIX system is always available as a command-line interface and remains active and accessible, even when the user enters the X environment. In this respect, X runs on top of the basic UNIX system. While in the X environment, a user can access the UNIX command-line interface through a shell window (this at least appears to function much like the MS-DOS prompt window option available in Microsoft Windows). From this shell window, the user can perform tasks, execute commands, and view system processes at work.

Users start the X Window System by issuing the following command:

startx

X can run a series of window managers. Each window manager has a different look and feel. Some of these (such as twm) appear quite bare bones and technical, while others are quite attractive, even fancy. There is even one X window manager available that emulates the Windows 95 look and feel. Other platforms are likewise emulated, including the NeXT window system and the Amiga Workbench system. Other windowing systems (some based on X and some proprietary) are shown in Table 7.4.

Table 7.4. Common windowing systems in UNIX.

Window System    Company
OpenWindows      Sun Microsystems
AIXWindows       IBM
HPVUE            Hewlett Packard
Indigo Magic     Silicon Graphics

What Kinds of Applications Run on UNIX?

Many types of applications run on UNIX. Some of these are high-performance applications for use in scientific research and artificial intelligence. I have already mentioned that certain high-level graphics applications are also common, particularly to the SGI platform. However, not every UNIX application is so specialized or eclectic. Perfectly normal applications run in UNIX, and many of them are recognizable names common to the PC and Mac communities (such as Adobe Photoshop, WordPerfect, and other front-line products).

Equally, I don't want readers to get the wrong idea. UNIX is by no means a platform that lacks a sense of humor or fun. Indeed, there are many games and amusing utilities available for this unique operating system.

Essentially, modern UNIX is much like any other platform in this respect. Window systems tend to come with suites of applications integrated into the package. These include file managers, text editors, mail tools, clocks, calendars, calculators, and the usual fare.

There is also a rich collection of multimedia software for use with UNIX, including movie players, audio CD utilities, recording facilities for digital sound, two-way camera systems, multimedia mail, and other fun things. Basically, just about anything you can think of has been written for UNIX.

UNIX in Relation to Internet Security

Because UNIX supports so many avenues of networking, securing UNIX servers is a formidable task. This is in contrast to servers implemented on the Macintosh or IBM-compatible platforms. The operating systems most common to these platforms do not support anywhere close to the number of network protocols natively available under UNIX.

Traditionally, UNIX security has been a complex field. In this respect, UNIX is often at odds with itself. UNIX was developed as the ultimate open system (that is, its source code has long been freely available, the system supports a wide range of protocols, and its design is uniquely oriented to facilitate multiple forms of communication). These attributes make UNIX the most popular networking platform ever devised. Nevertheless, these same attributes make security a difficult thing to achieve. How can you allow every manner of open access and fluid networking while still providing security?

Over the years, many advances have been made in UNIX security. These, in large part, were spawned by governmental use of the operating system, and most versions of UNIX have made it to the Evaluated Products List (EPL). Many of these advances were implemented early in the operating system's history.


UNIX is used in many environments that demand security. As such, there are hundreds of security programs available to tune up or otherwise improve the security of a UNIX system. Many of these tools are freely available on the Internet. Such tools can be classified into two basic categories: security audit tools and system logging tools.


Security audit tools tend to be programs that automatically detect holes within systems. These typically check for known vulnerabilities and common misconfigurations that can lead to security breaches. Such tools are designed for wide-scale network auditing and, therefore, can be used to check many machines on a given network. These tools are advantageous because they reveal inherent weaknesses within the audited system. However, these tools are also liabilities because they provide powerful capabilities to crackers in the void. In the wrong hands, these tools can be used to compromise many hosts.

Conversely, system logging tools are used to record the activities of users and system messages. These logs are written to plain text files or to files that automatically organize themselves into one or more database formats. Logging tools are a staple resource in any UNIX security toolbox. Often, the logs generated by such utilities form the basis of evidence when you pursue an intruder or build a case against a cracker. However, deep logging of the system can be costly in terms of disk space. Moreover, many of these tools work flawlessly at collecting data but provide no easy way to interpret it. Thus, security personnel may be faced with writing their own programs to perform this task.
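
As a trivial example of the kind of home-grown interpretation program just mentioned, here is a sketch in C that counts log lines containing the word "failed". The file name and the search string are placeholders; real log formats vary widely from one UNIX system to the next.

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    /* Default to a hypothetical log file name if none is given. */
    const char *path = (argc > 1) ? argv[1] : "messages.log";

    FILE *fp = fopen(path, "r");
    if (fp == NULL) {
        perror(path);
        return 1;
    }

    char line[1024];
    long hits = 0;

    /* Count every line that mentions a failure of some kind. */
    while (fgets(line, sizeof(line), fp) != NULL) {
        if (strstr(line, "failed") != NULL)
            hits++;
    }

    fclose(fp);
    printf("%ld suspicious entries found in %s\n", hits, path);
    return 0;
}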

UNIX security is a far more difficult field than security on other platforms, primarily because UNIX is such a large and complicated operating system. Naturally, this means that obtaining personnel with true UNIX security expertise can be a laborious and costly process. Although such people are not particularly rare, most of them already occupy key positions in firms throughout the nation. As a result, consulting in this area has become a lucrative business.

One good point about UNIX security is that because UNIX has been around for so long, much is known about its inherent flaws. Although new holes crop up on a fairly regular basis, their sources are quickly identified. Moreover, the UNIX community as a whole is well networked with respect to security. There are many mailing lists, archives, and online databases of information dealing with UNIX security. The same cannot be so easily said for other operating systems. Nevertheless, this trend is changing, particularly with regard to Microsoft Windows NT. There is now strong support for NT security on the Net, and that support is growing each day.

The Internet: How Big Is It?

This section requires a bit more history, and I am going to run through it rapidly. Early in the 1980s, the Internet as we now know it was born. The number of hosts was in the hundreds, and it seemed to researchers even then that the Internet was massive. Sometime in 1986, the first freely available public access server was established on the Net. It was only a matter of time--a mere decade, as it turned out--before humanity would storm the beach of cyberspace; it would soon come alive with the sounds of merchants peddling their wares.

By 1988, there were more than 50,000 hosts on the Net. Then a bizarre event took place: In November of that year, a worm program was released into the network. This worm infected numerous machines (reportedly over 5,000) and left them in various stages of disrupted service or distress (I will discuss this event in Chapter 5, "Is Security a Futile Endeavor?"). This brought the Internet into the public eye in a big way, plastering it across the front pages of our nation's newspapers.

By 1990, the number of Internet hosts exceeded 300,000. For a variety of reasons, the U.S. government released its hold on the network in this year, leaving it to the National Science Foundation (NSF). The NSF had instituted strong restrictions against commercial use of the Internet. However, amidst debates over cost considerations (operating the Internet backbone required substantial resources), NSF suddenly relinquished authority over the Net in 1991, opening the way for commercial entities to seize control of network bandwidth.

Still, the public at large had not yet arrived. The majority of private Internet users got their access from providers like Delphi. Access was entirely command-line based and far too intimidating for the average user. This changed suddenly when revolutionary software developed at the University of Minnesota was released: Gopher. Gopher was the first Internet navigation tool designed for use in GUI environments. The World Wide Web browser followed soon thereafter.

In 1995, NSF retired entirely from its long-standing position as overseer of the Net. The Internet was completely commercialized almost instantly as companies across America rushed to get connected to the backbone. The companies were immediately followed by the American public, which was empowered by new browsers such as NCSA Mosaic, Netscape Navigator, and Microsoft Internet Explorer. The Internet was suddenly accessible to anyone with a computer, a windowing system, and a mouse.

Today, the Internet sports more than 10 million hosts and reportedly serves some 40 million individuals. Some projections indicate that if Internet usage continues along its current path of growth, the entire Western world will be connected by the year 2001. Barring some extraordinary event to slow this path, these estimates are probably correct.

Today's Internet is truly massive, housing hundreds of thousands of networks. Many of these run varied operating systems and hardware platforms. Well over 100 countries besides the United States are connected, and that number is increasing every year. The only question is this: What does the future hold for the Internet?

The Future

There have been many projections about where the Internet is going. Most of these projections (at least those of common knowledge to the public) are cast by marketeers and spin doctors anxious to sell more bandwidth, more hardware, more software, and more hype. In essence, America's icons of big business are trying to control the Net and bend it to their will. This is a formidable task for several reasons.

One is that the technology for the Internet is now moving faster than the public's ability to buy it. For example, much of corporate America is intent on using the Internet as an entertainment medium. The network is well suited for such purposes, but implementation is difficult, primarily because average users cannot afford the hardware necessary to receive high-speed transmissions. Most users are getting along with modems at speeds of 28.8Kbps. Other options exist, true, but they are expensive. ISDN, for example, is a viable solution only for folks with funds to spare or for companies doing business on the Net. It is also of some significance that ISDN is more difficult to configure--on any platform--than the average modem. For some of my clients, this has been a significant deterrent. I occasionally hear from people who turned to ISDN, found the configuration problems overwhelming, and went back to conventional 28.8Kbps modems. Furthermore, in certain parts of the country, the mere use of an ISDN telephone line is billed for each minute of connection time.


NOTE: Although telephone companies initially viewed ISDN as a big money maker, that projection proved to be somewhat premature. These companies envisioned huge profits, which never really materialized. There are many reasons for this. One is that ISDN modems are still very expensive compared to their 28.8Kbps counterparts. This is a significant deterrent to most casual users. Another reason is that consumers know they can avoid heavy-duty phone company charges by surfing at night. (For example, many telephone companies only enforce heavy charges from 8:00 a.m. to 5:00 p.m.) But these are not the only reasons. There are other methods of access emerging that will probably render ISDN technology obsolete. Today's consumers are keenly aware of these trends, and many have adopted a wait-and-see attitude.

Cable modems offer one promising solution. These new devices, currently being tested throughout the United States, will reportedly deliver Net access at 100 times the speed of modems now in use. However, there are deep problems to be solved within the cable modem industry. For example, no standards have yet been established. Therefore, each cable modem will be entirely proprietary. With no standards, the price of cable modems will probably remain very high (ranging anywhere from $300 to $600). This could discourage most buyers. There are also issues as to what cable modem to buy. Their capabilities vary dramatically. Some, for example, offer extremely high throughput while receiving data but only meager throughput when transmitting it. For some users, this simply isn't suitable. A practical example would be someone who plans to video-conference on a regular basis. True, they could receive the image of their video-conference partner at high speed, but they would be unable to send at that same speed.


NOTE: There are other, more practical problems that plague the otherwise bright future of cable modem connections. For example, consumers are told that they will essentially have the speed of a low-end T3 connection for $39 a month, but this is only partially true. Although their cable modem and the coax wire it's connected to are capable of such speeds, the average consumer will likely never see the full potential because all inhabitants in a particular area (typically a neighborhood) must share the bandwidth of the connection. For example, in an apartment building, the 10Mbps is divided among the inhabitants patched into that wire. Thus, if a user in apartment 1A is running a search agent that collects hundreds of megabytes of information each day, the remaining inhabitants in other apartments will suffer a tremendous loss of bandwidth. This is clearly unsuitable.


Cross Reference: Cable modem technology is an aggressive climate now, with several dozen big players seeking to capture the lion's share of the market. To get in-depth information about the struggle (and what cable modems have to offer), point your Web browser to http://rpcp.mit.edu/~gingold/cable/.

Other technologies, such as WebTV, offer promise. WebTV is a device that makes surfing the Net as easy as watching television. These units are easily installed, and the interface is quite intuitive. However, systems such as WebTV may bring an unwanted influence to the Net: censorship. Many of the materials on the Internet could be characterized as highly objectionable. In this category are certain forms of hard-core pornography and seditious or revolutionary material. If WebTV were to become the standard method of Internet access, the government might attempt to regulate what type of material could appear. This might undermine the grass-roots, free-speech environment of the Net.


NOTE: Since the writing of this chapter, Microsoft Corporation has purchased WebTV (even though the sales for WebTV proved to be far less than industry experts had projected). Of course, this is just my personal opinion, but I think the idea was somewhat ill-conceived. The Internet is not yet an entertainment medium, nor will it be for some time, largely due to speed and bandwidth constraints. One wonders whether Microsoft didn't move prematurely in making its purchase. Perhaps Microsoft bought WebTV expressly for the purpose of shelving it. This is possible. After all, such a purchase would be one way to eliminate what seemed (at least at the time) to be some formidable competition to MSN.


Cross Reference: WebTV does have interesting possibilities and offers one very simple way to get acquainted with the Internet. If you are a new user and find Net navigation confusing, you might want to check out WebTV's home page at http://www.webtv.net/.

Either way, the Internet is about to become an important part of every American's life. Banks and other financial institutions are now offering banking over the Internet. Within five years, this will likely replace the standard method of banking. Similarly, a good deal of trade has been taken to the Net.

Summary

This chapter briefly examines the birth of the Internet. Next on the agenda are the historical and practical points of the network's protocols, or methods of data transport. These topics are essential for understanding the fundamentals of Internet security.



