Friday, August 29, 2014

And finally...CHAPTER 1

Here's chapter 1. Let me know what you think. Am I missing anything? What else should be included? I am sure there are other issues. Please share by commenting below.

CHAPTER 1: Introduction

What is Networking

What is networking? Why do we care about it? What does it allow us to do? Networking means different things to different people. It is a way to stay connected. It is a technology. It is hardware. It is software. But while networking is many things, in order to discuss it, to break it down into its integral parts, and to understand how it works and what it means to us, we need a more formal definition. With that in mind, networking is the connection of two or more computers in order to communicate, share hardware, share files, and so on. It is that capability that makes networking so valuable: the ability to do more with less, more efficiently.
Networks allow us to communicate at speeds and in ways that we never could before.  Email, blogs, websites, Voice over IP (VoIP), and so on all allow us to share rich synchronous and asynchronous information, easily accessible anywhere on a variety of devices.
Files can be accessed from around the world, creating collaborative opportunities that simply would not have been feasible a few short years ago. Hardware can be shared, making for more efficient allocation of resources. For example, rather than purchasing a printer for every computer within an organization, a shared printer can be utilized. This has far more reaching effects than just saving on the purchase price of each and every printer. Consider the costs associated with setting up each of those printers, with troubleshooting them when there is a problem, and with maintaining print cartridges and paper for each individual printer. Quickly, it becomes obvious that the benefits of shared hardware far outweigh both the direct and the indirect costs. (Figure 1.1)
Another benefit of networking is that it creates a managerial opportunity to centralize and coordinate efforts. Before networking existed, in order to share files, one would have to use the sneakernet: a "network" whereby one would save a file to a floppy disk and physically carry that disk to another computer in order to share it. Obviously this could be cumbersome, especially as the people with whom you wanted to share the file got farther away from your current location. But it also created an additional problem: the decentralization of data. From a network manager's perspective, this was bad. Why? How do you create a process to safeguard that data? To protect it from prying eyes? To back it up to protect against viruses or hardware failure? An early solution was to implement procedures whereby individuals were responsible for such issues. However, relying on the weakest link in an information system, people, is, let's just say, risky at best. A better solution is to centralize and automate as much as you can. With data stored in a centralized location, or at least controlled from a centralized location, it can be more easily and safely backed up and secured.

History of Computing

To truly understand all of the benefits that networking provides and the direction that networking is going, we must first understand where we came from. The origins of computing are difficult to pinpoint. There are numerous events which have contributed to the field we know and love. The point is that no single event, and no single individual, was responsible for the development of the computer. Yet these various events, sometimes in complete isolation from one another, all contributed to the field of computing that exists today. Understanding the origins of computing helps inform us of the directions computing will move in the future.


Blaise Pascal (1623-1662): A French philosopher and mathematician, Pascal developed one of the first mechanical calculators. It served as an inspiration for those who followed, and the programming language Pascal was later named after him. His design, begun when he was just 19, used suspended weights and allowed for addition but not subtraction.
Gottfried Leibniz (1646-1716): A German philosopher who, much like Pascal, was interested in developing a mechanical machine to aid in basic calculations. Unlike Pascal's, his machine used rotating drums, which made it capable of addition, subtraction, multiplication, and division.
Charles Babbage (1791-1871): A British mathematician, Babbage designed first the Difference Engine and later the Analytical Engine. It is interesting to note that he was unable to complete either device, largely due to the inability to manufacture gears with enough precision. Babbage introduced the concept of conditional branching, an important concept in computing.
Herman Hollerith (1860-1929): The son of German immigrants, Hollerith was born in Buffalo, New York. Working on the 1880 census, Hollerith observed the painstaking effort and the amount of time that the count took (seven years). He then began work on a tabulating machine to speed the process. The company that he formed, after various mergers and acquisitions, ultimately became known as International Business Machines (IBM).
John Mauchly (1907-1980): An American who, along with Eckert, pioneered electronic computing with logic structures. Though initially interested in developing a computer to predict weather, Mauchly and Eckert built ENIAC to calculate firing tables for the military during World War II. This was followed by UNIVAC, which successfully predicted the outcome of the 1952 presidential election. Though they tried to commercialize their product, various court cases and more business-savvy competitors ultimately did in Mauchly and Eckert.
John Eckert (1919-1995): An American electrical engineer who, with Mauchly, helped to develop first the ENIAC and later the UNIVAC. Though a genius in his field, neither he nor Mauchly had a mind for business, and they were unable to successfully launch their company.
Alan Turing (1912-1954): A British mathematician who helped the British develop code-breaking hardware to defeat the German Enigma machine. Ostracized after his homosexuality became public, Turing died of an apparent suicide, though under mysterious circumstances. It has been speculated that his knowledge of code breaking and security made him a security risk, and that this may have created a motive for him to be silenced.
Tommy Flowers (1905-1998): An engineer for the British during World War II who helped develop arguably the world's first programmable electronic computer, Colossus. Colossus was used to break German codes. Unfortunately for Flowers, his work went largely unknown for years, as Colossus was treated as a state secret by the British and most of the machines were destroyed.
Konrad Zuse (1910-1995): A German civil engineer and inventor who built one of the world's first computers during World War II. His work went largely unnoticed in the West and was weakly supported by the Nazis, who were unable to see the value in Zuse's work.
Howard Aiken (1900-1973): An American electrical engineer who, after returning to school to work on his Ph.D., began work on a machine to speed up the calculations his research required. After teaming up with IBM in order to obtain significant investment, they developed the Mark I, largely based on Babbage's design for the Analytical Engine.
Grace Hopper (1906-1992): An American naval officer and computer scientist, Hopper helped Aiken in the development of the Mark I. She later worked for Mauchly and Eckert as a mathematician on the UNIVAC, and she helped to develop the COmmon Business Oriented Language (COBOL) programming language.
John V. Atanasoff (1903-1995): An American who, like so many of his contemporaries, was faced with daunting mathematical calculations during work on his dissertation. His search for a solution led him to develop arguably the world's first electronic computer. Though it lacked the ability to incorporate conditional statements, it was completely electronic and used the binary numbering system, to which we owe our modern-day reliance on binary.
Clifford Berry (1918-1963): Worked with Atanasoff to design and build the Atanasoff-Berry Computer (ABC).
Richard Stallman (1953-): A pioneer in modern American computing who started out in the area of artificial intelligence. Stallman is an advocate for free software, a movement that laid the groundwork for what is now often called open source software. He is the founder of the Free Software Foundation.
William (Bill) H. Gates (1955-): One of the original founders of Microsoft, Gates revolutionized the software industry with the concept of software licensing, whereby customers purchase the right to use software rather than ownership of it. Microsoft gained dominance in the software industry with the success of its operating systems.

Stephen (Steve) Wozniak (1950-): Co-founder of Apple, Wozniak was a former employee of Hewlett-Packard and the brains behind the hardware. As Apple took off, Wozniak's role in the company diminished.
Steven (Steve) Paul Jobs (1955-2011): Co-founder of Apple and NeXT. A true visionary who was able to shape market demand through marketing and innovative design.
Table 1: Prominent Figures in Computing History
Table 1 lists a variety of prominent figures in computing history. There were many, MANY more. A common theme in the development of early mechanical computing devices into the early electronic computers was the desire to automate mathematical calculations. The specific motivations differed from person to person. Pascal, for example, wanted to develop a mechanical device to help calculate taxes for his father, a tax collector. Mauchly had developed complex mathematical models to predict weather patterns and needed a device that could perform the calculations more quickly and accurately. But the underlying need was often the same: faster, more accurate calculations.


These pioneers developed the foundations from which we still work today. Concepts such as the "mill," which performed functions similar to those of a modern central processing unit (CPU), were used to carry out complex instructions. The "store," similar to modern-day random access memory (RAM), was used to hold the results of calculations. These concepts made their way through design after design, serving as building blocks to get to where we are today. The point to keep in mind is that some of the devices hitting the market today may serve as the foundation for a much larger market in the future.

History of Networking

While the development of the computer was necessary to increase the speed and accuracy of calculation, it was not sufficient to fully leverage the effect that computing power could have on society. It was not until the value of linking computers together became apparent that computers started to evolve beyond being merely fast number crunchers. In 1945, Vannevar Bush wrote an article titled "As We May Think" in which he outlined several seemingly far-off concepts. One such concept was something he called the memex. Though the description is a little crude, Bush described essentially what would eventually become the World Wide Web: a device that would link related documents together, making it easier to retrieve needed information. Bush, director of the U.S. Office of Scientific Research and Development (OSRD), led the wartime research effort that laid the groundwork for agencies such as the Defense Advanced Research Projects Agency (DARPA). Eventually DARPA, with Bob Taylor heading its computing research office, sought to develop a system whereby four computers could be connected to each other in order to share files and programs. The Internet had been conceived in the form of the ARPANET, which began functioning in 1969.
While the ARPANET represented one of the world's first packet-switched networks, it was a far cry from the Internet that we know today. Those early days of the ARPANET were met with fairly rapid expansion, but this led to a complication that plagues networking designs even today: the issue of scalability. It turned out that the protocols developed worked fine for small networks, but as the networks grew in size, those early protocols were not particularly scalable. These problems led to the development of the Transmission Control Protocol (TCP), designed to break messages apart at their origin and reassemble them at their destination, as well as the Internet Protocol (IP), designed to route packets through a network from their origin to their destination. Though they address different issues, together TCP/IP have grown to be the backbone of modern-day networking.
While TCP/IP were great for wide area network (WAN) connections, computers were becoming smaller and more powerful, meaning that multiple computers could be grouped together in the same room. In other words, there was a shift away from mainframes and a push towards personal computers. Initially these were largely hobbyist toys, until Steve Jobs and Steve Wozniak launched one of the first personal computers successfully marketed to the masses. But the real break came when they visited the visionaries at Xerox's Palo Alto Research Center (PARC). While the two Steves learned a lot from that visit, including about object-oriented programming and the graphical user interface, it was the wealth of other innovative ideas bursting at the seams at PARC that really lit the fire under the explosion that was about to happen in networking. In the wake of PARC's work on Ethernet, companies such as 3Com and Cisco became synonymous with networking through their production of switches, routers, and network interface cards (NICs) based on the Ethernet standard. Modern networking was born. It was now cheap and easy to connect personal computers to share hardware and software. Networking had come into its own.

Why We Use Networks

Why are networks in such demand?  It's all about green.  No, not green computing.  Not directly, anyway.  It's about money.  Networks allow for the easy sharing of information such as files and the easy sharing of hardware such as printers.  Without networking, social networks as we know them today would not be possible.  There would be no email.  No smartphones.  No Internet.  Networking is what ties the various computing devices in an office, a university, and around the globe together in such a way that information is seamlessly shared.  The ability to share information is nothing new.  What is new is the ability to share so much information so quickly.  That speed and volume allows organizations to save money...LOTS of money.

Network Models

What is a model? A model is a simplified version of reality used to conceptualize and analyze a more complex phenomenon. Think about a model airplane, a model car, or a model ship. They allow us to view various aspects of real-life planes, cars, or ships and to learn and share ideas about those items. We can create different types of models to illustrate different aspects of whatever it is that we are interested in observing. Networks are no different. When it comes to networks, we can develop models to reflect their topology, their geographic scope, the relative relationships between nodes, and finally the architecture that reflects how data traverses networked systems.

Topologies

Models, as they relate to networking, are often expressed as topologies. A topology is defined as "the study of geometric properties and spatial relations unaffected by the continuous change of shape or size of figures," according to Google. In the context of networking, it refers to the relationships (shapes) between computing devices, including routers, switches, and computers, and the media with which they are connected. Before we get into some of the more common topologies in networking, we must first discuss an issue that many students struggle with: the distinction between logical and physical topologies. This distinction between logical and physical concepts in networking extends to logical and physical designs as well. As it relates to topologies specifically, one simply has to focus on the data. For a logical topology, ask how the data flows. Does it go directly from the sending node to the destination node? Does it go through an intermediary device such as a hub, switch, or router? Does it travel through other nodes? This is important because it tells you something about the technology being used on the network. If the data travels in a circular fashion, then logically it is a ring network. If it travels through an intermediary device directly to its destination, it is a star topology. If it travels directly to the other device, it is likely at least a partial mesh, if not a full mesh, topology. Physically, a token ring network and an Ethernet network might look identical; yet if nodes of each type are connected to the same network segment, they will not be able to communicate. Let's examine how some of the common topologies look and then come back to this distinction to see if it makes more sense.
The physical topology of a network refers to the physical relationship between the nodes (computers) on a network. Often it is the same as the logical topology, but not always. For example, when a network has a physical star topology with a hub rather than a switch at the center, physically it is a star, but logically it is a bus topology. To visualize this, consider the bus as being internal to the hub: all ports receive the signal. Similarly, a physical star topology can logically be a ring topology if the central device is a token ring hub in which the ring exists inside the device rather than in the wiring between the nodes. See below for more detailed descriptions of common topologies; a short sketch after the list makes the link-count comparison between them concrete.
  • Ring Design: In a ring design, each node is connected to exactly two other nodes. As a message travels from the sending node to the receiving node, the message must pass through each node in between the two.
    • Advantages:
      • Access to the network is very controlled; a node must hold the token in order to communicate
      • Very scalable as the use of the network grows
      • No need for network server to control access to network
      • Additional components do not affect the performance of the network
      • Each computer has equal access to resources
    • Disadvantages:
      • Each packet must pass through other nodes on the network, making it relatively slow
      • If one node goes down, the entire network is affected
      • Network is highly dependent on the media used
      • Hardware (NICs, hubs, switches, etc.) is relatively expensive compared to Ethernet hardware
  • Partial Mesh: As shown in the graphic, a partial mesh network links each node to multiple other nodes, but not all nodes are connected to all other nodes.
    • Advantages
      • Data can be transferred to different devices simultaneously
      • If a transmission line fails, data can still travel other routes
      • Expansion and modification can be done without disrupting other nodes
    • Disadvantages
      • High chance of redundancy in many network connections
      • High cost relative to many other designs
      • Setup, administration, and maintenance are difficult
  • Star: A star-based physical topology, or some variation thereof, is the most common configuration. In a star-based physical topology there is a central device, usually a switch, though a hub can potentially be used. Each node is connected independently to that centralized device.
    • Advantages
      • Messages travel directly to the intended node, traveling only through the centralized device (assuming a switch is used)
      • Easy to connect new devices without disrupting current devices on the network
      • Centralized management makes it easy to manage the network
      • Failure of one node does not affect the rest of the network
    • Disadvantages
      • Dependence on central device means that if that device fails, the entire network fails
      • Use of a centralized device adds an additional cost to the network for the device itself
      • Performance of the network is reduced as new nodes are added
      • The number of nodes that can be connected is limited by the capacity of the centralized device
  • Fully-Connected Mesh: Each node is connected to every other node on the network.
    • Advantages
      • Data can be transmitted and received from multiple other nodes simultaneously
      • Since there are multiple data pathways, one can fail without significantly affecting network performance
      • Expansion and modification can be done without affecting functioning nodes
    • Disadvantages
      • High costs due to the multiple redundant connections
      • Setup, administration, and maintenance are difficult and expensive
  • Bus: The simplest of network designs, a single cable (the bus) is used to connect the various nodes on the network; each node simply taps into that shared cable. A bus is typically made up of coaxial cable and requires proper termination in order to work correctly. Without proper termination, signals can be disrupted, destabilizing the network. Good for small LANs.
    • Advantages
      • Easy to set up and extend the network
      • Requires the least amount of cable
      • Low cost
    • Disadvantages
      • Limited cable length and number of nodes that can reside on the bus
      • If the cable breaks, the entire network is compromised.
      • Failure to properly terminate bus will lead to network failure
      • Difficult to troubleshoot when a node fails, as the whole network is affected
      • Maintenance costs get higher with time
      • Efficiency of bus design declines as nodes are added to the network
      • Not suitable for networks with heavy traffic
      • Low security since all nodes receive all signals
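To make the trade-offs above more concrete, here is a minimal sketch (in Python, with made-up node names and a made-up switch label "SW") that represents each topology as a set of links and simply counts how many links each design requires. For the same five nodes, the full mesh already needs ten links while the star needs only one per node, which illustrates the cost differences listed above.

    # A minimal sketch of the topologies above, modeled as sets of links.
    # Node names ("A" through "E") and the switch label "SW" are invented.
    from itertools import combinations

    nodes = ["A", "B", "C", "D", "E"]

    # Ring: each node links to exactly two neighbors, forming a loop.
    ring = {(nodes[i], nodes[(i + 1) % len(nodes)]) for i in range(len(nodes))}

    # Star: each node links only to a central device (a switch here).
    star = {("SW", n) for n in nodes}

    # Full mesh: every node links to every other node.
    full_mesh = set(combinations(nodes, 2))

    # Bus: every node taps into one shared cable, modeled as a single segment.
    bus = {("BUS", n) for n in nodes}

    for name, links in [("ring", ring), ("star", star),
                        ("full mesh", full_mesh), ("bus", bus)]:
        print(f"{name:9s} {len(links):2d} links: {sorted(links)}")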

Geographic Scope

Another way to examine networks is based on their geographic scope; in other words, the relative distance between the devices on a network. This perspective yields PANs, LANs, CANs, MANs, and WANs.
Personal Area Networks (PANs):  PANs are small networks, usually made up of Bluetooth devices, that quickly and easily connect devices within a few feet of each other.  Examples include Bluetooth headsets that link to cell phones and Bluetooth printers that connect wirelessly to computers, eliminating printer cables. I have a little Google Nexus 7" tablet that connects via Bluetooth, eliminating the need to plug in any sort of cabling.
Local Area Networks (LANs): LANs are larger in scale.  Though different texts will provide similar but often distinct definitions, recognize that it can sometimes be difficult to determine precisely what represents a LAN versus a CAN versus a PAN, etc.  For the most part, a LAN exists on a single property where cables and wireless access points can be deployed without the need to lease lines from a carrier or obtain permission from anyone else.  A LAN will consist of anywhere from two nodes to usually no more than a few hundred, and it will usually stretch no more than a few hundred feet.
Campus Area Networks (CANs): CANs are larger still.  Think of your local university or a large corporate headquarters like Microsoft's in Redmond, Washington.  These are sites that need more than a couple hundred nodes on the network; rather, they need to connect multiple LANs together, all on contiguous property.  This LAN of LANs can be referred to as a CAN.
Metropolitan Area Networks (MANs): MANs are larger in scope yet again, this time spanning a city or even multiple cities.  These are becoming more and more common.  Denton and Granbury, Texas, both have MANs that provide services not only to citizens, but also to workers who require network access in the field.
Wide Area Networks (WANs): WANs are the ultimate in scope.  They span still larger regions, potentially covering the planet.  The Internet is the ultimate example of a WAN, though certainly not the only WAN that exists.  Many corporations lease lines in various cities around the world in order to establish their WANs. WANs often utilize public carriers such as AT&T, Sprint, and so on in order to capitalize on what those carriers do well, rather than having to become experts themselves at laying and maintaining lines around the world, a very expensive endeavor.

Relative Node Relationships

Another way to think about the relationship of computers within networks is in terms of their architecture.  The three most common types are peer-to-peer (P2P), client-server, and host-based.
P2P networks carry a couple of different connotations.  They are related, but they are distinct as well.  In a LAN environment, P2P represents a simple way to set up various nodes on an equal basis.  In such an environment, each computer on the LAN is on an equal footing: each computer manages its own files, its own security, and its own access.  Though simple to set up initially, it has certain issues that arise when scaled to a larger environment.  This makes P2P networks appropriate for home and small office networks, but as the size of the network increases, other types of networks should be utilized.  So, what are some of those scaling issues?  Well, security for one.  In a P2P environment, security is managed on each individual node.  In other words, a user created on one computer has rights associated ONLY with that particular computer.  To have rights on another computer on the network, another account has to be created on that computer.  Even if the user name and password are identical, they are actually separate accounts, with unique home directories, rights, and privileges.  Obviously, as more and more computers are added to the network, granting access to users creates a larger and larger problem.  Storage is another issue.  P2P environments encourage local storage of files.  This makes finding files, collaborating, and backing up files more and more difficult as those files become spread across more and more computers.
P2P networks carry another connotation as well: P2P also exists as a class of software that links computers across the Internet.  Popular programs like Kazaa allow users on disparate networks to share files across the Internet in a free and open way. While this can be great in terms of sharing and collaboration, it is also fraught with intellectual property violations and security concerns. Viruses and other types of malware are often contracted from this type of networking. Additionally, intellectual property theft often occurs as movies and songs are frequently shared illegally.
Client-server computing, on the other hand, does not consider computers on the network as peers. Rather, one or more computers on such a network acts as a server, offering its services to clients, the machines that consume those services. Services may come in the form of sharing hardware, such as a printer, or sharing files, as in the case of a file server. Other common services include email, web, DNS, DHCP, and so on; some of these are terms that will be addressed later in the text. The idea, though, is that a client consumes, or uses, these services. For example, a client requests a web page from a web server and consumes it by rendering it on the user's monitor. It should be noted that a single machine can be both a server and a client, delivering and consuming services at the same time. Likewise, a server can offer more than one service at a time, something more common on smaller networks.
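To make the client-server relationship concrete, here is a minimal sketch using Python's standard socket library. It runs a tiny "echo" service in a background thread and then acts as the client that consumes that service; the loopback address, port number, and message are arbitrary choices for illustration, not part of any particular product.

    # Minimal client-server sketch: the server offers a simple echo service,
    # and the client connects to it and consumes that service.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007        # loopback address; port picked arbitrarily
    ready = threading.Event()

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            ready.set()                    # tell the client the server is listening
            conn, _addr = srv.accept()     # wait for one client to connect
            with conn:
                data = conn.recv(1024)     # receive the client's request
                conn.sendall(b"Server received: " + data)

    threading.Thread(target=server, daemon=True).start()
    ready.wait()                           # do not connect before the server is up

    # The client side: connect to the server and use its service.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"Hello from the client")
        print(cli.recv(1024).decode())     # prints: Server received: Hello from the client

In a real deployment the server would loop to accept many clients and would typically run on a dedicated machine, but the division of labor, one side offering a service and the other consuming it, is the same.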
Like P2P networking, the term "server" has multiple connotations. In the context described above, server refers to software that enables the sharing of those services; Windows Server 2012 is an example of this type of software. Server may also refer to specific types of hardware. Machines that have multiple processors, lots of RAM, redundant power supplies, and so on are often considered servers. For example, I have a couple of Dell PowerEdge 2950s that I use for various things such as hosting websites, hosting virtual machines, and so on. They are designed to be heavy-duty machines, with lots of processing power, redundant systems, and durable construction.
The final model is the host-based architecture. The modern version of this approach is a reincarnation of the old mainframe days, in which a large, very powerful machine performed all of the processing. Terminals that hooked up to the mainframe were nothing more than a screen and keyboard that provided input and output functionality for interacting with the mainframe. Terminals were said to be "dumb" because they had no processing power other than the ability to display characters as dictated by the user and the mainframe. Today's version is sometimes referred to as the "thin" client, as opposed to the "thick" client, a fully functional computer usually capable of performing many tasks even when disconnected from the network. Thin clients, on the other hand, rely on a server to provide much of the processing power. Consider cloud-based applications, for example. These require very little in the way of computing power from the client machines with which we access the services; rather, the processing is done in the cloud. In this case, the cloud is a server specifically set up to serve some particular service, such as Google Docs. If or when the network connection drops, a lot, if not all, of the functionality of the client disappears, so this can be a concern for this design. At the same time, this design is desirable in terms of reducing hardware costs (thin clients are usually cheaper than thick clients), support costs (updates are performed once on the server and reflected in all of the thin clients), and security (rather than having to secure each thin client, security measures can be centralized around the server).

Data Architecture View

Still another way to view networks is through an architecture that models how data travels from software applications down to the wire, radio signals, or light pulses, and then to another computer where the reverse occurs. While there are several such models, they are usually based either on the Open Systems Interconnection (OSI) model or the Department of Defense (DoD) model. The reality is that there is a lot of similarity between the models, and it is usually easy to see the relationships between the different models when discussing them.
The OSI model (Figure 1.2) has been around for a long time. It consists of seven layers: Application, Presentation, Session, Transport, Network, Data-link, and Physical, from top to bottom. Perhaps an easier way to remember this is "All People Seem To Need Data Processing." It's little tidbits like that which make it a little easier to remember.
Each layer works with the layers above and/or below it. The application layer is not to be confused with software applications. The application layer processes application layer protocols, which include protocols such as HTTP, FTP, and SMTP. So, when a user types a web address into the address bar of a browser and hits enter, the browser begins the process of retrieving a web page by issuing a request using the HTTP protocol.
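As a small illustration of an application layer protocol at work, the sketch below uses Python's standard urllib to issue the same kind of HTTP GET request a browser issues when you press enter. It assumes a working Internet connection, and example.com is simply a convenient public test page.

    # What "typing an address and pressing enter" roughly becomes at the
    # application layer: an HTTP GET request followed by an HTTP response.
    from urllib.request import urlopen

    with urlopen("http://example.com/") as response:   # sends an HTTP GET
        print(response.status)                          # e.g. 200 (OK)
        print(response.getheader("Content-Type"))       # e.g. text/html; charset=UTF-8
        print(response.read()[:80])                     # the first bytes of the page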
The presentation layer works very closely with the application layer and performs several very important functions, including data representation and encryption, something very important when accessing your bank account. Protocols that operate at this layer include MIME, XDR, and others.
The session layer, too, works very closely with the application and presentation layers. The session layer is responsible for opening, managing, and closing connections. Some communications on the Internet require that a connection, similar to a physical circuit, be established; that is what the session layer provides. Protocols common at this layer include NetBIOS, PPTP, and others.
Once the data comes out of the three layers above, the transport layer encapsulates that information into a segment. The transport layer is responsible for taking larger messages, breaking them up into smaller, more manageable segments, and making sure they make their way from node to node until they finally reach their destination. The transport layer is said to provide reliable communication in that if a segment is not received, a replacement segment is sent. Protocols common at this layer include TCP and UDP.
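The sketch below is a simplified illustration of that idea, not real TCP: it breaks a message into numbered segments, shuffles them to mimic a network delivering them out of order, and then reassembles them at the "destination." The message text and segment size are arbitrary choices for the example.

    # Simplified view of transport layer segmentation and reassembly.
    import random

    MESSAGE = b"Networking is the connection of two or more computers."
    SEGMENT_SIZE = 12                      # arbitrary payload size for the sketch

    # Sender: split the message and tag each piece with a sequence number.
    segments = [(seq, MESSAGE[i:i + SEGMENT_SIZE])
                for seq, i in enumerate(range(0, len(MESSAGE), SEGMENT_SIZE))]

    random.shuffle(segments)               # pretend the network reordered them

    # Receiver: sort by sequence number and rebuild the original message.
    reassembled = b"".join(payload for _seq, payload in sorted(segments))
    assert reassembled == MESSAGE
    print(reassembled.decode())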
The network layer handles addressing. It aids in determining the most appropriate path(s) for packets to take. Like the transport layer, the network layer encapsulates: it wraps transport segments into packets. Using an Internet Protocol (IP) packet, network addresses are used to identify the most efficient path for packets to take. IP addresses are assigned automatically or manually by users or system administrators and are software configurable. Protocols at this layer include IP, IPsec, and others.
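Python's standard ipaddress module can make the idea of a logical, software-configurable address concrete. The addresses below are private example addresses chosen purely for illustration.

    # A network-layer address identifies both a host and the network it lives on.
    import ipaddress

    net = ipaddress.ip_network("192.168.1.0/24")    # an example private network
    host = ipaddress.ip_address("192.168.1.42")     # an example host on that network

    print(host in net)                              # True: this host belongs to the network
    print(net.network_address, net.broadcast_address, net.num_addresses)
    print(ipaddress.ip_address("10.0.0.5") in net)  # False: a host on a different network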
The data link layer operates within a local area network segment. It is composed of two sub-layers: the media access control (MAC) sub-layer and the logical link control (LLC) sub-layer. Together, these sub-layers control access between the physical media, such as the network cable, fiber optics, or radio waves, and the network layer above. The data link layer encapsulates packets into frames in order to place them on the media. Typical protocols at this layer include Ethernet and Token Ring.
The final layer is the physical layer. This is the physical media across which transmissions are made; more specifically, it consists of the specifications for the physical media. Examples of physical media include twisted-pair cabling, fiber-optic cabling, and radio waves. The specifications dictate, for example, how many twists per inch twisted-pair cabling must have. They also dictate the various wireless standards, such as 802.11ac. Typical protocols or standards at this level include 802.11, RS-232, and others.
When a user requests a web page at the application layer, that request makes its way down through each respective layer until it reaches the wire, fiber-optic cable, or radio waves. When the data arrives at its ultimate destination, it makes its way back up through the same layers in reverse. This time, each layer strips the frame, packet, and segment information away as the data moves up the stack until, ultimately, the data is presented to the application layer.
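The toy simulation below mimics that journey with made-up text "headers": on the way down, each layer wraps the data from the layer above, and on the way up, each header is stripped off in reverse order. Real headers are binary structures, so this sketch only shows the ordering, not the actual formats.

    # Toy encapsulation / de-encapsulation: each layer adds its own header
    # on the way down and removes it on the way back up.
    LAYERS = ["transport", "network", "data-link"]

    def send(app_data: str) -> str:
        pdu = app_data
        for layer in LAYERS:                   # moving down the stack
            pdu = f"[{layer}-hdr]{pdu}"
        return pdu                             # what actually goes on the wire

    def receive(wire_data: str) -> str:
        pdu = wire_data
        for layer in reversed(LAYERS):         # moving back up the stack
            header = f"[{layer}-hdr]"
            assert pdu.startswith(header), "unexpected header order"
            pdu = pdu[len(header):]
        return pdu

    on_the_wire = send("GET /index.html")
    print(on_the_wire)          # [data-link-hdr][network-hdr][transport-hdr]GET /index.html
    print(receive(on_the_wire)) # GET /index.html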
Whereas the OSI model consists of seven layers, the Five-Layer Internet model, sometimes referred to as the TCP/IP protocol suite, consists of five. The functionality of the application, presentation, and session layers of the OSI model is often handled by the software application chosen. For example, when a user uses a browser such as Microsoft Internet Explorer, the browser handles the application, presentation, and session layer protocols. Because much of networking and the Internet operates in this fashion, some models combine those three layers and consider them a single layer. The Five-Layer Internet model is one of those models: the application, presentation, and session layer functions are all compressed into a single layer called the application layer. Following the application layer are the transport, network, data-link, and physical layers, each performing the same functions as previously described for the OSI model. This text uses the Five-Layer Internet model as its theoretical foundation for discussing networking concepts.

Standards

Standards are an important concept in computing in general and in networking specifically. Without standards, interoperability would be limited at best and impossible at worst. A standard is defined as something "used or accepted as normal or average," according to Google. Standards are important because, contrary to what some may think, the Internet, and by extension networks (at least conceptually), are not owned by any one person, company, or country/government. Because of the variety of "players" in the market that make so many hardware and software devices, the only thing that ensures that various devices will be able to work together is the establishment of and adherence to standards. Consider the example of a manufacturer who only makes light bulbs. Without a socket for the light bulb to be inserted into, the light bulb is useless. If the manufacturer expects to sell light bulbs, they either have to produce bulbs that fit standardized sockets, or make some other arrangement to take advantage of a specialized socket.
As they relate to standards, there are three essential issues: the standards-making process, the various standards-making bodies, and the standards commonly in use today.

The Standards Making Process

There are essentially two types of standards: formal standards (de jure) and informal standards (de facto). The way in which each comes about is significantly different. Formal standards are usually developed over several years with the backing of one or more governments and/or industry trade groups. Negotiations occur over a considerable time, designed to address the concerns of the various stakeholders. Often, this delay in bringing the standard to market can result in the standard being of less value, particularly in the technology sector, due to the pace of change. A prime example of a formal standard is the 802.11n wireless networking standard. Over a period of several years, various aspects of the wireless protocol were hammered out by stakeholders to address issues such as speed, security, operational distance, and so on. At the same time, due to the length of time it took for the standard to come to market, several manufacturers actually started producing equipment claiming to be compliant with the standard before it even became official. The demands of the market significantly outpaced the speed with which the standards-making bodies (discussed in a moment) could move to create the official standard.
The process of developing formal standards is fairly straightforward. Essentially, there are three steps: specification, identification, and acceptance. Specification refers to identifying relevant and appropriate terminology and identifying the problem(s) to be addressed. Though various standards-making bodies may differ in their process, this step may include a Request For Proposal (RFP) in order to solicit a more detailed problem definition; this is often done by a governmental agency or a software/hardware manufacturer. Identification refers to identifying various choices to address the problem(s) and selecting the most appropriate choice. Often, a Request For Comment (RFC) is released in order to identify potential issues in a proposed standard before it is finalized. Depending on the response to the RFC, subsequent RFCs may be issued until most, if not all, issues have been addressed. Finally, acceptance refers to actually defining the solution and convincing industry leaders to adopt the standard.
An informal standard, on the other hand, is one that simply emerges as a matter of happenstance. The Microsoft Windows operating system and Office suite have become informal standards in most organizations. These types of standards often come about due to compatibility, ease of use, and sometimes price. However, there is a closed nature to these types of standards that limits competition by limiting the ability of others to compete and innovate with these proprietary technologies.

Standards Making Bodies

International Organization for Standardization

The International Organization for Standardization (ISO) is the originator of the OSI model. As an international organization, it is responsible for setting and promoting industrial and commercial standards. Funded by organizations that manage specific projects, subscriptions from members, and the sale of standards, the ISO sets standards in a wide range of areas. These include standards for information security practices (ISO 17799) and standards for vehicle roof load carriers (ISO 11154), among many others. As of this writing, there are 162 member countries out of roughly 205 countries in the world. Membership consists of three classifications: Member (has voting rights), Correspondent (informed of ISO standards as they progress but with no voting rights), and Subscriber (pays reduced fees and can follow standards development).

International Telecommunications Union (ITU)

An agency of the United Nations, the ITU is responsible for setting information and communication technology standards internationally. They coordinate use of radio frequencies among nations, promote cooperation among nations in determining satellite orbits, and work to improve telecommunications infrastructures worldwide.

American National Standards Institute (ANSI)

ANSI is an American private, non-profit organization that helps other standards organizations, government agencies, consumer groups, companies, and others reach consensus on standards for products, services, processes, systems, and personnel in the United States. The goal is to ensure that terms, definitions, and product tests are consistent.

Institute of Electrical and Electronics Engineers (IEEE)

A professional organization headquartered in New York City, IEEE operates in multiple domains including standards development. IEEE develops standards for various industries including power and energy, biomedical and health care, information technology, telecommunications, transportation, nanotechnology, information assurance, etc.

Internet Engineering Task Force (IETF)

The IETF is an open standards organization with no membership requirements that focuses on standards associated with the Internet.

Commonly Used Standards

There are far too many computing standards to discuss in a networking text. Many books have been written on the nuances of individual standards. But there are some standards that seem to crop up more than others. For example, the 802.11 suite of protocols represents the wireless technologies that many of us are familiar with in our homes, at work, and in so many commercial places. The standard, developed by the IEEE, dictates power, range, and so forth so that various manufacturers can be sure they are able to create devices that will be able to communicate together. Such standards dictate the power at which such devices can transmit and receive, transmission coding schemes, encryption schemes, and more. Without strict adherence to standards, manufacturers would be making devices that would be unable to communicate with each other.

Virtualization

Virtualization means different things to different people. There are several contexts relevant to computing; network, storage, and software virtualization are some of the more prominent areas. Virtualization helps data centers better utilize the physical space within a facility. Software virtualization allows administrators to quickly "distribute" software applications to users. Storage virtualization allows users to store files and then access them from anywhere at their convenience. One goal behind virtualization is to go green, a term you have probably heard in numerous contexts. But it is more than just saving money and reducing carbon footprints. Virtualization can also serve as a way to more effectively create and manage networks. Network virtualization allows network designers to create physically simple network designs, reducing deployment costs, while still building very complex "virtual" or logical networks that allow for segmentation, address security concerns, and so on. Virtualization will be discussed in further detail in this text. It is a field that will continue to grow in importance, and it would serve you well to learn all that you can about virtualization as a way to distinguish yourself from your peers.
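As a small, hedged illustration of the segmentation idea behind network virtualization, the sketch below models one shared physical switch carrying two logical segments using VLAN-style IDs. The host names and VLAN numbers are invented for the example; real equipment would enforce this in hardware and configuration rather than in application code.

    # One physical switch, several logical (virtual) networks: hosts can only
    # reach hosts assigned to the same logical segment.
    vlan_of = {
        "hr-pc1": 10, "hr-pc2": 10,        # Human Resources segment
        "eng-pc1": 20, "eng-pc2": 20,      # Engineering segment
    }

    def same_segment(a: str, b: str) -> bool:
        """Two hosts can talk directly only if they share a logical segment."""
        return vlan_of[a] == vlan_of[b]

    print(same_segment("hr-pc1", "hr-pc2"))    # True: same virtual network
    print(same_segment("hr-pc1", "eng-pc1"))   # False: logically separated,
                                               # even though the hardware is shared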

Convergence and Future Trends

Like virtualization, convergence can mean different things to different people. This stems from the different types of convergence trends that exist. Technological convergence refers to related devices being combined in order to increase utility and create a synergistic effect. For example, Apple's combination of the cellular phone, the portable music player, and its media service, iTunes, converged to create a product that set a very high bar for competitors. Telecommunications convergence refers to combining multiple communications services onto a single network. In general, convergence efforts are undertaken because they provide cost savings and/or a synergistic effect whereby, by combining multiple technologies, the resulting product or service is more valuable than all of the individual products or services separately. (Figure 1.3)
Trends such as convergence are important to monitor in order to anticipate changes in the marketplace. Investments in network systems are, by their very nature, expensive and tend to have a long-term planning and use horizon. One way to anticipate future trends and upcoming technologies is to read various trade and industry publications such as Network World, Information Week, and so on. Another useful tool is to keep an eye on Gartner's Hype Cycle, which tracks emerging technologies and estimates how far each is from mainstream adoption. Understanding current events and being able to successfully anticipate future trends helps network designers to design and implement networks that are more scalable and able to grow and contract as the needs of the network change over time.

Becoming a Networking Professional

Becoming a networking professional can be a rewarding choice should you wish to pursue it. Having said that, it can also be a difficult field in which to thrive. So, where to begin? Well, this is a good start. Get your formal education. If nothing else, completing your formal education illustrates to employers that you can start a large project and follow it through to completion. However, when combined with a strategic eye on your major, your minor, various certifications, and experience, your formal education can not only help you get a job in the field, it can also help you advance rapidly. But to really set yourself up for advancement in whatever organization you end up with, you need to develop your soft skills too. What are soft skills? Soft skills include dealing with people, good oral and written communication, a strong work ethic, the ability to work with others, leadership skills, and so on. To excel in networking specifically, and in computer-related fields in general, you need strong technical skills and strong soft skills.

Education

There are various collegiate programs available to help you learn about information systems in general and networking specifically. These range from associate degrees at two-year institutions to four-year degrees at universities to graduate degrees. Majors include Business Computer Information Systems, Information Technology, Information Systems, and so on. These types of majors tend to be a little more general purpose and combine both technical and managerial skills, something useful if you are interested in advancing in whatever organization you ultimately work for. Alternatively, for those more technically inclined and with less vertical ambition, computer science might be a good alternative. Regardless of the specific major you choose, I believe that choosing a business minor is a wise choice. If you have the option of choosing a specific discipline for your minor, consider your future interests in terms of advancing within an organization. If you are interested in security, I would probably choose something like accounting or finance as a minor. If you would like to become a CIO or CTO, consider management. If you are interested in website development and coding, consider marketing. The point is, start to think strategically about your education and how it can help you in the future.

Certifications

The value of computer certifications is debated in academia and in industry. I am of the opinion that anything that helps to set you apart from your peers is a good thing. This opinion is shared by several of us here at Tarleton State University. Why? Certifications help their holders to obtain higher salaries, better opportunities, and peer respect, and in some instances, access to better support through sponsoring organizations. But there are numerous certifications in the field of computing and in networking specifically. Which one is right for you? Where do you start? There are several organizations that offer certifications in the area of networking. Below, several of the more popular certifications and their sponsors are discussed.
  • LabSim/TestOut: LabSim, offered by TestOut and utilized in several courses here at Tarleton, provides online training. The training includes video lectures, demonstrations, quizzes, and simulations. Upon successful completion of the course and corresponding exam, the student is awarded the Network Pro certification, an entry-level certification which is closely aligned with CompTIA's Network+ certification, discussed next.
  • CompTIA: CompTIA offers numerous entry-level certifications in various fields. They have recently started to expand their offerings to include more advanced certifications that demonstrate more advanced skill sets. CompTIA offers the Network+ certification, an entry-level certification in the field of networking. It is a widely recognized certification, and I recommend it for that reason. Often, it is taken shortly after the A+ certification, another entry-level certification geared toward PC repair.
  • Cisco: Cisco is the quintessential networking company. They manufacture various networking components such as switches and routers. Unlike the previous two certification providers, Cisco offers both entry-level and more advanced certifications. The Cisco Certified Entry Networking Technician (CCENT) is their entry-level offering, signifying an entry-level networking skill set. The Cisco Certified Network Associate (CCNA) is a more familiar certification, considered entry level by many, yet it is recommended that you have one to three years of experience before sitting for the exam. Still more advanced Cisco certifications include the Cisco Certified Network Professional (CCNP) and the Cisco Certified Internetwork Expert (CCIE). (Figure 1.4)

Experience

There is no substitute for experience. Though a degree and various certifications will serve you well, it is often difficult to get your foot in the door without a little experience. But how do you gain experience without a job? If you are currently taking my class, great. If you are taking a similar class at another institution, first, thank you for tuning in; second, like my own students, you are probably already working, whether on campus or professionally. However, it is also likely that you have little if any experience in the field. This must change. Contact your Career Services office and your department to inquire about lab positions and various computer support positions on campus. Search employment sites on the Internet such as Monster.com. Develop your LinkedIn profile and treat it like an online resume. Read the classified ads in the newspapers for the market in which you are looking for a job. Attend career fairs and network. Often, developing your contacts within your area of interest helps you find out about positions that may not have been announced yet. If you have to, consider hiring the services of a headhunter. Don't be too proud. Many IT professionals start out in help desk positions. Just remember that this is an opportunity to get your foot in the door. As you gain experience, continue taking your classes while considering the applicability of what you are learning to your job function.
If you are fortunate enough to already have a job outside of school, great! If it is an IT-related position, even better. But if it is not, use your time wisely. Finish your formal education. Pursue your certifications vigorously. Start with CompTIA's A+ certification, followed by CompTIA's Network+ certification. Pursue Cisco's CCENT certification. Continue adding on until you sufficiently distinguish yourself from your peers, until employers recognize your newfound skill set and you are able to slide into a new position more closely aligned with your education.
If you are still struggling to find a position in the field, consider joining some of the professional organizations that exist. There are many of them out there, each with its own specialization. Often, they offer their own job boards and, in some cases, employment opportunities. Though there are many more, some of the more prominent IT-related professional organizations include:
In addition to joining these groups, if you are still in school, consider checking with your information systems or computer science department's student organizations. They often have local chapters that help you develop some of the important skills described above and also help students find internship opportunities and prepare for job fairs.

Implications for Management

Management must be aware of how networking is used and supported within their organization in order to effectively invest in and manage the organization's computing resources. In order to stay informed, key personnel such as network managers and Chief Information Officers must use tools such as Gartner's Hype Cycle, read various trade publications, and so on. What are some of those technologies and applications? Let's explore some of them. As of this writing, some of the prominent technologies currently coming online include:
  • Biometric Authentication
  • Bring Your Own Device (BYOD)
  • Cloud Computing
  • Hosted Virtual Desktops
  • Virtual Worlds
These are just some of the technologies peeking over the horizon. What does this mean for management? It means they have to consider purchasing different hardware to be able to perform biometric authentication, developing policies for handling data and security on BYOD devices, addressing the storage and use of services in the cloud, and so on. As network designers, we must be prepared for demands on bandwidth, the need to design scalable networks, and the need to provide services to increasingly mobile stakeholders.

Summary

This chapter introduced the concept of networking by first defining it and then discussing why networks are useful. Types of networks were discussed in terms of geographic scope and processing location. The chapter then moved on to discuss network models and the concepts of logical and physical topologies. The networking model discussion continued by talking about how data travels up and down a protocol stack using first the OSI model as a tool and then the Five-Layer Internet model to do the same. Then the chapter moved on to discuss the importance of standards, the standards making process, and some of the more important standards making bodies.
At that point, the chapter changed focus a bit and began to focus on perhaps more tangible aspects of networking. For example, the concept of virtualization was discussed as was the concept of convergence and future trends. A discussion of how to become a networking professional stressed the importance of education, training, and experience. Finally, the chapter concluded with a discussion of the implications for management.

Key Terms

  • Campus Area Network (CAN): Larger in scope than a LAN, a CAN covers multiple buildings. It is not large enough to span a city, and it does not usually require obtaining permits to run cables or leasing lines to complete connections. Often, such a network will require multiple routers, and multiple LANs are linked together.
  • Client/Server Architecture: An explanation of the relationship between two or more nodes where some computers provide services such as web pages, email, or DNS to clients that use those services.
  • Convergence: Refers to different technologies coming together in order to provide a richer environment. An example is the cell phone, in which phone services are combined with the functionality of a small computer to provide Internet and email access, among other things.
  • Five-Layer Internet Model: A hierarchical model that illustrates how data traverses a network from client to server and back.
  • Host Based Systems: An explanation of the relationship between two or more nodes where one very powerful computer, often a mainframe, performs all of the processing and each client is a dumb terminal that essentially only displays screens to users. A mouse and keyboard may be used for input and the monitor for output.
  • Local Area Network (LAN): Geographically, a small network. It can consist of a few computers up to several dozen. Usually located on a single floor of a building or in a single building, and not requiring any sort of router unless the LAN is to be connected to another network.
  • Metropolitan Area Network (MAN): Generally covering a city or several cities that are close to each other. Services are provided to city workers and sometimes services are offered to citizens.
  • Networking: The connection of two or more computers, wired and/or wireless, in order to be able to communicate, share hardware, files, etc.
  • Network Pro: An entry-level certification provided by TestOut (LabSim). It is very similar to the Network+ certification.
  • Network+: An entry level certification offered by CompTIA. It is more widely recognized than the Network Pro certification.
  • OSI Model: Like the Five-Layer Internet Model, the OSI model describes the flow of data from the user down through the protocol stack on the client machine, across the wire, and then back up through the protocol stack of the server.
  • Peer-to-Peer Network (P2P): A network in which all nodes are equivalent, each serving as both a client and a server. The term has a connotation at the LAN level and also describes applications that allow nodes across the Internet to share files.
  • Personal Area Network (PAN): Smaller than a LAN, a PAN usually covers only a few feet. Sometimes referred to as a piconet, it is usually a Bluetooth network that is designed to eliminate the use of cables. Useful for connecting wireless printers to computers, phones to wireless headsets, and so on.
  • Sneakernet: A network requiring the physical transport of media from one computer to another computer in order to share files.
  • Topology: A description of the physical or logical relationship between things. With respect to computer networking, it is a description of the physical arrangement of devices or the logical path data takes through the network.
  • Virtualization: An approach that takes something physical and represents it using software. Examples include virtual networks and virtual machines, which manifest as multiple operating systems running on a single, high-powered physical machine.
  • Wide Area Network (WAN): Larger in scope than a MAN, a WAN covers a large geographic region which may cross regions and political borders. The largest WAN in existence is the Internet.
  • Standards: Standards are either formally adopted or informally followed conventions that allow for uniformity in hardware and software to ensure interoperability.

Review Questions

  1. Name and describe the layers of the OSI model.
  2. Discuss three different types of modeling techniques used to describe networks.
  3. What are some of the steps one can take to prepare to enter the workforce in the field of networking?

Bibliography

The History of Computing Project. (2012, November 12). Retrieved from The History of Computing Project: http://www.thocp.net/index.html
Isaacson, W. (2011). Steve Jobs. New York: Simon & Schuster.

McCartney, S. (1999). ENIAC: The Triumphs and Tragedies of the World's First Computer. New York: Walker & Company.