Saturday, December 12, 2009

How Virtual Private Networks Work

The world has changed a lot in the last couple of decades. Instead of simply dealing with local or regional concerns, many businesses now have to think about global markets and logistics. Many companies have facilities spread out across the country or around the world, and there is one thing that all of them need: a way to maintain fast, secure and reliable communications wherever their offices are.

Until fairly recently, this has meant the use of leased lines to maintain a wide area network (WAN). Leased lines, ranging from ISDN (integrated services digital network, 128 Kbps) to OC3 (Optical Carrier-3, 155 Mbps) fiber, provided a company with a way to expand its private network beyond its immediate geographic area. A WAN had obvious advantages over a public network like the Internet when it came to reliability, performance and security. But maintaining a WAN, particularly when using leased lines, can become quite expensive and often rises in cost as the distance between the offices increases.
As the popularity of the Internet grew, businesses turned to it as a means of extending their own networks. First came intranets, which are password-protected sites designed for use only by company employees. Now, many companies are creating their own VPN (virtual private network) to accommodate the needs of remote employees and distant offices.

Basically, a VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together. Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. In this article, you will gain a fundamental understanding of VPNs, and learn about basic VPN components, technologies, tunneling and security.

What Makes a VPN?

A well-designed VPN can greatly benefit a company. For example, it can:

* Extend geographic connectivity
* Improve security
* Reduce operational costs versus traditional WAN
* Reduce transit time and transportation costs for remote users
* Improve productivity
* Simplify network topology
* Provide global networking opportunities
* Provide telecommuter support
* Provide broadband networking compatibility
* Provide faster ROI (return on investment) than traditional WAN

What features are needed in a well-designed VPN? It should incorporate:

* Security
* Reliability
* Scalability
* Network management
* Policy management

Remote-Access VPN

There are two common types of VPN. Remote-access, also called a virtual private dial-up network (VPDN), is a user-to-LAN connection used by a company that has employees who need to connect to the private network from various remote locations. Typically, a corporation that wishes to set up a large remote-access VPN will outsource to an enterprise service provider (ESP). The ESP sets up a network access server (NAS) and provides the remote users with desktop client software for their computers. The telecommuters can then dial a toll-free number to reach the NAS and use their VPN client software to access the corporate network.

A good example of a company that needs a remote-access VPN would be a large firm with hundreds of sales people in the field. Remote-access VPNs permit secure, encrypted connections between a company's private network and remote users through a third-party service provider.

Site-to-Site VPN

Through the use of dedicated equipment and large-scale encryption, a company can connect multiple fixed sites over a public network such as the Internet. Site-to-site VPNs can be one of two types:


* Intranet-based - If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect LAN to LAN.

* Extranet-based - When a company has a close relationship with another company (for example, a partner, supplier or customer), they can build an extranet VPN that connects LAN to LAN, and that allows all of the various companies to work in a shared environment.

Analogy: Each LAN is an Island


Imagine that you live on an island in a huge ocean. There are thousands of other islands all around you, some very close and others farther away. The normal way to travel is to take a ferry from your island to whichever island you wish to visit. Of course, traveling on a ferry means that you have almost no privacy. Anything you do can be seen by someone else.

Let's say that each island represents a private LAN and the ocean is the Internet. Traveling by ferry is like connecting to a Web server or other device through the Internet. You have no control over the wires and routers that make up the Internet, just like you have no control over the other people on the ferry. This leaves you susceptible to security issues if you are trying to connect between two private networks using a public resource.

Continuing with our analogy, your island decides to build a bridge to another island so that there is an easier, more secure and direct way for people to travel between the two. It is expensive to build and maintain the bridge, even though the island you are connecting with is very close. But the need for a reliable, secure path is so great that you do it anyway. Your island would like to connect to a second island that is much farther away but decides that the costs are simply too much to bear.

This is very much like having a leased line. The bridges (leased lines) are separate from the ocean (Internet), yet are able to connect the islands (LANs). Many companies have chosen this route because of the need for security and reliability in connecting their remote offices. However, if the offices are very far apart, the cost can be prohibitively high -- just like trying to build a bridge that spans a great distance.

So how does VPN fit in? Using our analogy, we could give each inhabitant of our islands a small submarine. Let's assume that your submarine has some amazing properties:

* It's fast.
* It's easy to take with you wherever you go.
* It's able to completely hide you from any other boats or submarines.
* It's dependable.
* It costs little to add additional submarines to your fleet once the first is purchased.

Although they are traveling in the ocean along with other traffic, the inhabitants of our two islands could travel back and forth whenever they wanted to with privacy and security. That's essentially how a VPN works. Each remote member of your network can communicate in a secure and reliable manner using the Internet as the medium to connect to the private LAN. A VPN can grow to accommodate more users and different locations much more easily than a leased line. In fact, scalability is a major advantage that VPNs have over typical leased lines. Unlike with leased lines, where the cost increases in proportion to the distances involved, the geographic locations of each office matter little in the creation of a VPN.

VPN Security: Firewalls

A well-designed VPN uses several methods for keeping your connection and data secure:

* Firewalls
* Encryption
* IPSec
* AAA Server

In the following sections, we'll discuss each of these security methods. We'll start with the firewall.

A firewall provides a strong barrier between your private network and the Internet. You can set firewalls to restrict the number of open ports, what types of packets are passed through and which protocols are allowed through. Some VPN products, such as Cisco's 1700 routers, can be upgraded to include firewall capabilities by running the appropriate Cisco IOS on them. You should already have a good firewall in place before you implement a VPN, but a firewall can also be used to terminate the VPN sessions.
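To make the idea concrete, here is a minimal Python sketch of the kind of rule checking a firewall performs on each packet. The rule values and packet fields are made-up examples, not a real firewall configuration:

    # A toy packet filter: each rule restricts which ports and
    # protocols are allowed through. Values are hypothetical.

    ALLOWED_PORTS = {80, 443, 25}        # open ports (HTTP, HTTPS, SMTP)
    ALLOWED_PROTOCOLS = {"tcp", "udp"}   # permitted transport protocols

    def allow_packet(packet: dict) -> bool:
        """Return True only if the packet passes every rule."""
        if packet["protocol"] not in ALLOWED_PROTOCOLS:
            return False
        if packet["dst_port"] not in ALLOWED_PORTS:
            return False
        return True

    print(allow_packet({"protocol": "tcp", "dst_port": 80}))    # True
    print(allow_packet({"protocol": "icmp", "dst_port": 0}))    # False (protocol blocked)
    print(allow_packet({"protocol": "tcp", "dst_port": 23}))    # False (port closed)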

VPN Security: Encryption


Encryption is the process of taking all the data that one computer is sending to another and encoding it into a form that only the other computer will be able to decode. Most computer encryption systems belong in one of two categories:

* Symmetric-key encryption
* Public-key encryption

In symmetric-key encryption, each computer has a secret key (code) that it can use to encrypt a packet of information before it is sent over the network to another computer. Symmetric-key encryption requires that you know which computers will be talking to each other so you can install the key on each one. Symmetric-key encryption is essentially the same as a secret code that each of the two computers must know in order to decode the information. The code provides the key to decoding the message. Think of it like this: You create a coded message to send to a friend in which each letter is substituted with the letter that is two down from it in the alphabet. So "A" becomes "C," and "B" becomes "D". You have already told a trusted friend that the code is "Shift by 2". Your friend gets the message and decodes it. Anyone else who sees the message will see only nonsense.
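Here is that "Shift by 2" scheme as a short Python sketch. It only illustrates the shared-secret idea -- a real symmetric cipher is vastly stronger:

    def shift_encode(message: str, key: int = 2) -> str:
        """Replace each letter with the letter `key` places down the alphabet."""
        result = []
        for ch in message:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                result.append(chr((ord(ch) - base + key) % 26 + base))
            else:
                result.append(ch)
        return "".join(result)

    def shift_decode(message: str, key: int = 2) -> str:
        return shift_encode(message, -key)

    secret = shift_encode("ATTACK AT DAWN")
    print(secret)                  # CVVCEM CV FCYP
    print(shift_decode(secret))    # ATTACK AT DAWN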

Public-key encryption uses a combination of a private key and a public key. The private key is known only to your computer, while the public key is given by your computer to any computer that wants to communicate securely with it. To decode an encrypted message, a computer must use the public key, provided by the originating computer, and its own private key. A very popular public-key encryption utility is called Pretty Good Privacy (PGP), which allows you to encrypt almost anything.
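The mathematics behind real public-key systems is more involved than the description above, but a toy RSA example in Python shows the division of labor: anyone holding the public numbers can encrypt, and only the holder of the private number can decrypt. The primes below are far too small for real use:

    # Tiny textbook RSA (requires Python 3.8+ for pow(e, -1, phi)).
    p, q = 61, 53                 # two secret primes
    n = p * q                     # 3233 -- part of both keys
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent: public key is (e, n)
    d = pow(e, -1, phi)           # private exponent: private key is (d, n)

    message = 42
    ciphertext = pow(message, e, n)    # anyone with the public key can encrypt
    plaintext = pow(ciphertext, d, n)  # only the private key holder can decrypt

    print(ciphertext)   # 2557
    print(plaintext)    # 42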

VPN Security: IPSec

Internet Protocol Security Protocol (IPSec) provides enhanced security features such as better encryption algorithms and more comprehensive authentication.

IPSec has two encryption modes: tunnel and transport. Tunnel mode encrypts the header and the payload of each packet, while transport mode encrypts only the payload (a toy sketch of the difference follows the list below). Only systems that are IPSec compliant can take advantage of this protocol. Also, all devices must use a common key and the firewalls of each network must have very similar security policies set up. IPSec can encrypt data between various devices, such as:

* Router to router
* Firewall to router
* PC to router
* PC to server
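
Here is the toy sketch mentioned above contrasting the two modes. The encrypt() function is just a stand-in for a real cipher, and the addresses are illustrative:

    def encrypt(data: str) -> str:
        """Stand-in for a real cipher -- just marks data as encrypted."""
        return f"<encrypted:{data}>"

    packet = {"header": "src=10.0.0.1 dst=10.0.0.2", "payload": "hello"}

    # Transport mode: the header stays readable; only the payload is encrypted.
    transport = {"header": packet["header"],
                 "payload": encrypt(packet["payload"])}

    # Tunnel mode: the entire original packet (header included) is encrypted
    # and wrapped in a brand-new outer header, e.g. the VPN gateways' addresses.
    tunnel = {"header": "src=gateway-A dst=gateway-B",
              "payload": encrypt(str(packet))}

    print(transport)
    print(tunnel)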

VPN Security: AAA Servers


AAA (authentication, authorization and accounting) servers are used for more secure access in a remote-access VPN environment. When a request to establish a session comes in from a dial-up client, the request is proxied to the AAA server. AAA then checks the following:

* Who you are (authentication)
* What you are allowed to do (authorization)
* What you actually do (accounting)

The accounting information is especially useful for tracking client use for security auditing, billing or reporting purposes.
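A minimal Python sketch of the three checks, with a made-up user database and permission set, might look like this:

    # Hypothetical user database, permissions and audit log.
    USERS = {"alice": "s3cret"}                  # authentication data
    PERMISSIONS = {"alice": {"read", "write"}}   # authorization data
    audit_log = []                               # accounting data

    def aaa_check(user: str, password: str, action: str) -> bool:
        if USERS.get(user) != password:                  # who you are
            return False
        if action not in PERMISSIONS.get(user, set()):   # what you may do
            return False
        audit_log.append((user, action))                 # what you actually did
        return True

    print(aaa_check("alice", "s3cret", "read"))   # True
    print(aaa_check("alice", "wrong", "read"))    # False
    print(audit_log)                              # [('alice', 'read')]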

VPN Technologies

Depending on the type of VPN (remote-access or site-to-site), you will need to put in place certain components to build your VPN. These might include:

* Desktop software client for each remote user
* Dedicated hardware such as a VPN concentrator or secure PIX firewall
* Dedicated VPN server for dial-up services
* NAS (network access server) used by service provider for remote-user VPN access
* VPN network and policy-management center

Because there is no widely accepted standard for implementing a VPN, many companies have developed turn-key solutions on their own. In the next few sections, we'll discuss some of the solutions offered by Cisco, one of the most prevalent networking technology companies.

VPN Concentrator

Incorporating the most advanced encryption and authentication techniques available, Cisco VPN concentrators are built specifically for creating a remote-access VPN. They provide high availability, high performance and scalability and include components, called scalable encryption processing (SEP) modules, that enable users to easily increase capacity and throughput. The concentrators are offered in models suitable for everything from small businesses with up to 100 remote-access users to large organizations with up to 10,000 simultaneous remote users.

VPN-Optimized Router

Cisco's VPN-optimized routers provide scalability, routing, security and QoS (quality of service). Based on the Cisco IOS (Internetwork Operating System) software, there is a router suitable for every situation, from small-office/home-office (SOHO) access through central-site VPN aggregation, to large-scale enterprise needs.

Cisco Secure PIX Firewall

An amazing piece of technology, the PIX (private Internet exchange) firewall combines dynamic network address translation, proxy server, packet filtration, firewall and VPN capabilities in a single piece of hardware.

Instead of using Cisco IOS, this device has a highly streamlined OS that trades the ability to handle a variety of protocols for extreme robustness and performance by focusing on IP.

Tunneling

Most VPNs rely on tunneling to create a private network that reaches across the Internet. Essentially, tunneling is the process of placing an entire packet within another packet and sending it over a network. The protocol of the outer packet is understood by the network and by both points, called tunnel interfaces, where the packet enters and exits the network.

Tunneling requires three different protocols:

* Carrier protocol - The protocol used by the network that the information is traveling over
* Encapsulating protocol - The protocol (GRE, IPSec, L2F, PPTP, L2TP) that is wrapped around the original data
* Passenger protocol - The original data (IPX, NetBEUI, IP) being carried

Tunneling has amazing implications for VPNs. For example, you can place a packet that uses a protocol not supported on the Internet (such as NetBEUI) inside an IP packet and send it safely over the Internet. Or you could put a packet that uses a private (non-routable) IP address inside a packet that uses a globally unique IP address to extend a private network over the Internet.
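A rough Python sketch of this packet-in-packet idea, using illustrative addresses and protocol labels, looks like this:

    # The passenger packet uses a private, non-routable address.
    passenger = {"protocol": "IP", "src": "192.168.1.5",
                 "dst": "192.168.2.9", "data": "hello"}

    # The encapsulating protocol (here labeled GRE) wraps the passenger.
    encapsulated = {"protocol": "GRE", "inner": passenger}

    # The carrier packet uses public addresses the Internet can route.
    carrier = {"protocol": "IP", "src": "203.0.113.10",   # tunnel entrance
               "dst": "198.51.100.20",                    # tunnel exit
               "payload": encapsulated}

    # At the far tunnel interface the wrappers come off in reverse order:
    unwrapped = carrier["payload"]["inner"]
    print(unwrapped["dst"])   # 192.168.2.9 -- the private address survived the trip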

Tunneling: Site-to-Site

In a site-to-site VPN, GRE (generic routing encapsulation) is normally the encapsulating protocol that provides the framework for how to package the passenger protocol for transport over the carrier protocol, which is typically IP-based. This includes information on what type of packet you are encapsulating and information about the connection between the client and server. Instead of GRE, IPSec in tunnel mode is sometimes used as the encapsulating protocol. IPSec works well on both remote-access and site-to-site VPNs, but it must be supported at both tunnel interfaces to be used.

Tunneling: Remote-Access

In a remote-access VPN, tunneling normally takes place using PPP (Point-to-Point Protocol). Part of the TCP/IP stack, PPP is the carrier for other IP protocols when communicating over the network between the host computer and a remote system.

Each of the protocols listed below was built using the basic structure of PPP and is used by remote-access VPNs.

* L2F (Layer 2 Forwarding) - Developed by Cisco, L2F will use any authentication scheme supported by PPP.

* PPTP (Point-to-Point Tunneling Protocol) - PPTP was created by the PPTP Forum, a consortium which includes US Robotics, Microsoft, 3COM, Ascend and ECI Telematics. PPTP supports 40-bit and 128-bit encryption and will use any authentication scheme supported by PPP.

* L2TP (Layer 2 Tunneling Protocol) - L2TP is the product of a partnership between the members of the PPTP Forum, Cisco and the IETF (Internet Engineering Task Force). Combining features of both PPTP and L2F, L2TP also fully supports IPSec.

L2TP can be used as a tunneling protocol for site-to-site VPNs as well as remote-access VPNs. In fact, L2TP can create a tunnel between:

* Client and router
* NAS and router
* Router and router

Think of tunneling as having a computer delivered to you by UPS. The vendor packs the computer (passenger protocol) into a box (encapsulating protocol) which is then put on a UPS truck (carrier protocol) at the vendor's warehouse (entry tunnel interface). The truck (carrier protocol) travels over the highways (Internet) to your home (exit tunnel interface) and delivers the computer. You open the box (encapsulating protocol) and remove the computer (passenger protocol). Tunneling is just that simple!

As you can see, VPNs are a great way for a company to keep its employees and partners connected no matter where they are.

Friday, December 11, 2009

How Computer Viruses Work

Strange as it may sound, the computer virus is something of an Information Age marvel. On one hand, viruses show us how vulnerable we are -- a properly engineered virus can have a devastating effect, disrupting productivity and doing billions of dollars in damage. On the other hand, they show us how sophisticated and interconnected human beings have become.

For example, experts estimate that the Mydoom worm infected approximately a quarter-million computers in a single day in January 2004. Back in March 1999, the Melissa virus was so powerful that it forced Microsoft and a number of other very large companies to completely turn off their e-mail systems until the virus could be contained. The ILOVEYOU virus in 2000 had a similarly devastating effect. In January 2007, a worm called Storm appeared -- by October, experts believed up to 50 million computers were infected. That's pretty impressive when you consider that many viruses are incredibly simple.

When you listen to the news, you hear about many different forms of electronic infection. The most common are:

* Viruses - A virus is a small piece of software that piggybacks on real programs. For example, a virus might attach itself to a program such as a spreadsheet program. Each time the spreadsheet program runs, the virus runs, too, and it has the chance to reproduce (by attaching to other programs) or wreak havoc.
* E-mail viruses - An e-mail virus travels as an attachment to e-mail messages, and usually replicates itself by automatically mailing itself to dozens of people in the victim's e-mail address book. Some e-mail viruses don't even require a double-click -- they launch when you view the infected message in the preview pane of your e-mail software [source: Johnson].
* Trojan horses - A Trojan horse is simply a computer program. The program claims to do one thing (it may claim to be a game) but instead does damage when you run it (it may erase your hard disk). Trojan horses have no way to replicate automatically.
* Worms - A worm is a small piece of software that uses computer networks and security holes to replicate itself. A copy of the worm scans the network for another machine that has a specific security hole. It copies itself to the new machine using the security hole, and then starts replicating from there, as well.

Virus Origins

Computer viruses are called viruses because they share some of the traits of biological viruses. A computer virus passes from computer to computer like a biological virus passes from person to person.

Unlike a cell, a virus has no way to reproduce by itself. Instead, a biological virus must inject its DNA into a cell. The viral DNA then uses the cell's existing machinery to reproduce itself. In some cases, the cell fills with new viral particles until it bursts, releasing the virus. In other cases, the new virus particles bud off the cell one at a time, and the cell remains alive.

A computer virus shares some of these traits. A computer virus must piggyback on top of some other program or document in order to launch. Once it is running, it can infect other programs or documents. Obviously, the analogy between computer and biological viruses stretches things a bit, but there are enough similarities that the name sticks.

People write computer viruses. A person has to write the code, test it to make sure it spreads properly and then release it. A person also designs the virus's attack phase, whether it's a silly message or the destruction of a hard disk. Why do they do it?

There are at least three reasons. The first is the same psychology that drives vandals and arsonists. Why would someone want to break a window on someone's car, paint signs on buildings or burn down a beautiful forest? For some people, that seems to be a thrill. If that sort of person knows computer programming, then he or she may funnel energy into the creation of destructive viruses.

The second reason has to do with the thrill of watching things blow up. Some people have a fascination with things like explosions and car wrecks. When you were growing up, there might have been a kid in your neighborhood who learned how to make gunpowder. And that kid probably built bigger and bigger bombs until he either got bored or did some serious damage to himself. Creating a virus is a little like that -- it creates a bomb inside a computer, and the more computers that get infected the more "fun" the explosion.

The third reason involves bragging rights, or the thrill of doing it. Sort of like Mount Everest -- the mountain is there, so someone is compelled to climb it. If you are a certain type of programmer who sees a security hole that could be exploited, you might simply be compelled to exploit the hole yourself before someone else beats you to it.

Of course, most virus creators seem to miss the point that they cause real damage to real people with their creations. Destroying everything on a person's hard disk is real damage. Forcing a large company to waste thousands of hours cleaning up after a virus is real damage. Even a silly message is real damage because someone has to waste time getting rid of it. For this reason, the legal system is getting much harsher in punishing the people who create viruses.

Virus History

Traditional computer viruses were first widely seen in the late 1980s, and they came about because of several factors. The first factor was the spread of personal computers (PCs). Prior to the 1980s, home computers were nearly non-existent or they were toys. Real computers were rare, and they were locked away for use by "experts." During the 1980s, real computers started to spread to businesses and homes because of the popularity of the IBM PC (released in 1981) and the Apple Macintosh (released in 1984). By the late 1980s, PCs were widespread in businesses, homes and college campuses.

The second factor was the use of computer bulletin boards. People could dial up a bulletin board with a modem and download programs of all types. Games were extremely popular, and so were simple word processors, spreadsheets and other productivity software. Bulletin boards led to the precursor of the virus known as the Trojan horse. A Trojan horse is a program with a cool-sounding name and description. So you download it. When you run the program, however, it does something uncool like erasing your disk. You think you are getting a neat game, but it wipes out your system. Trojan horses only hit a small number of people because they are quickly discovered, the infected programs are removed and word of the danger spreads among users.

The third factor that led to the creation of viruses was the floppy disk. In the 1980s, programs were small, and you could fit the entire operating system, a few programs and some documents onto a floppy disk or two. Many computers did not have hard disks, so when you turned on your machine it would load the operating system and everything else from the floppy disk. Virus authors took advantage of this to create the first self-replicating programs.

Early viruses were pieces of code attached to a common program like a popular game or a popular word processor. A person might download an infected game from a bulletin board and run it. A virus like this is a small piece of code embedded in a larger, legitimate program. When the user runs the legitimate program, the virus loads itself into memory and looks around to see if it can find any other programs on the disk. If it can find one, it modifies the program to add the virus's code into the program. Then the virus launches the "real program." The user really has no way to know that the virus ever ran. Unfortunately, the virus has now reproduced itself, so two programs are infected. The next time the user launches either of those programs, they infect other programs, and the cycle continues.

If one of the infected programs is given to another person on a floppy disk, or if it is uploaded to a bulletin board, then other programs get infected. This is how the virus spreads.

The spreading part is the infection phase of the virus. Viruses wouldn't be so violently despised if all they did was replicate themselves. Most viruses also have a destructive attack phase where they do damage. Some sort of trigger will activate the attack phase, and the virus will then do something -- anything from printing a silly message on the screen to erasing all of your data. The trigger might be a specific date, the number of times the virus has been replicated or something similar.

Virus Evolution

Other Threats

Viruses and worms get a lot of publicity, but they aren't the only threats to your computer's health. Malware is just another name for software that has an evil intent. Here are some common types of malware and what they might do to your infected computer:

* Adware puts ads up on your screen.
* Spyware collects personal information about you, like your passwords or other information you type into your computer.
* Hijackers turn your machine into a zombie computer.
* Dialers force your computer to make phone calls. For example, one might dial toll 900-numbers, running up your phone bill while boosting revenue for the owners of those numbers. [source: Baratz and McLaughlin]

As virus creators became more sophisticated, they learned new tricks. One important trick was the ability to load viruses into memory so they could keep running in the background as long as the computer remained on. This gave viruses a much more effective way to replicate themselves. Another trick was the ability to infect the boot sector on floppy disks and hard disks. The boot sector is a small program that is the first part of the operating system that the computer loads. It contains a tiny program that tells the computer how to load the rest of the operating system. By putting its code in the boot sector, a virus can guarantee it is executed. It can load itself into memory immediately and run whenever the computer is on. Boot sector viruses can infect the boot sector of any floppy disk inserted in the machine, and on college campuses, where lots of people share machines, they could spread like wildfire.

In general, neither executable nor boot sector viruses are very threatening any longer. The first reason for the decline has been the huge size of today's programs. Nearly every program you buy today comes on a compact disc. Compact discs (CDs) cannot be modified, and that makes viral infection of a CD unlikely, unless the manufacturer permits a virus to be burned onto the CD during production. The programs are so big that the only easy way to move them around is to buy the CD. People certainly can't carry applications around on floppy disks like they did in the 1980s, when floppies full of programs were traded like baseball cards. Boot sector viruses have also declined because operating systems now protect the boot sector.

Infection from boot sector viruses and executable viruses is still possible. Even so, it is a lot harder, and these viruses don't spread nearly as quickly as they once did. Call it "shrinking habitat," if you want to use a biological analogy. The environment of floppy disks, small programs and weak operating systems made these viruses possible in the 1980s, but that environmental niche has been largely eliminated by huge executables, unchangeable CDs and better operating system safeguards.

Virus authors adapted to the changing computing environment by creating the e-mail virus. For example, the Melissa virus in March 1999 was spectacular. Melissa spread in Microsoft Word documents sent via e-mail, and it worked like this:

Someone created the virus as a Word document and uploaded it to an Internet newsgroup. Anyone who downloaded the document and opened it would trigger the virus. The virus would then send the document (and therefore itself) in an e-mail message to the first 50 people in the person's address book. The e-mail message contained a friendly note that included the person's name, so the recipient would open the document, thinking it was harmless. The virus would then create 50 new messages from the recipient's machine. At that rate, the Melissa virus quickly became the fastest-spreading virus anyone had seen at the time. As mentioned earlier, it forced a number of large companies to shut down their e-mail systems.

The ILOVEYOU virus, which appeared on May 4, 2000, was even simpler. It contained a piece of code as an attachment. People who double-clicked on the attachment launched the code. It then sent copies of itself to everyone in the victim's address book and started corrupting files on the victim's machine. This is as simple as a virus can get. It is really more of a Trojan horse distributed by e-mail than it is a virus.

The Melissa virus took advantage of the programming language built into Microsoft Word called VBA, or Visual Basic for Applications. It is a complete programming language and it can be programmed to do things like modify files and send e-mail messages. It also has a useful but dangerous auto-execute feature. A programmer can insert a program into a document that runs instantly whenever the document is opened. This is how the Melissa virus was programmed. Anyone who opened a document infected with Melissa would immediately activate the virus. It would send the 50 e-mails, and then infect a central file called NORMAL.DOT so that any file saved later would also contain the virus. It created a huge mess.

Microsoft applications have a feature called Macro Virus Protection built into them to prevent this sort of virus. With Macro Virus Protection turned on (the default option is ON), the auto-execute feature is disabled. So when a document tries to auto-execute viral code, a dialog pops up warning the user. Unfortunately, many people don't know what macros or macro viruses are, and when they see the dialog they ignore it, so the virus runs anyway. Many other people turn off the protection mechanism. So the Melissa virus spread despite the safeguards in place to prevent it.

In the case of the ILOVEYOU virus, the whole thing was human-powered. If a person double-clicked on the program that came as an attachment, then the program ran and did its thing. What fueled this virus was the human willingness to double-click on the executable.

Worms

A worm is a computer program that has the ability to copy itself from machine to machine. Worms use up computer time and network bandwidth when they replicate, and often carry payloads that do considerable damage. A worm called Code Red made huge headlines in 2001. Experts predicted that this worm could clog the Internet so effectively that things would completely grind to a halt.

A worm usually exploits some sort of security hole in a piece of software or the operating system. For example, the Slammer worm (which caused mayhem in January 2003) exploited a hole in Microsoft's SQL server. "Wired" magazine took a fascinating look inside Slammer's tiny (376 byte) program.

Worms normally move around and infect other machines through computer networks. Using a network, a worm can expand from a single copy incredibly quickly. The Code Red worm replicated itself more than 250,000 times in approximately nine hours on July 19, 2001 [Source: Rhodes].

The Code Red worm slowed down Internet traffic when it began to replicate itself, but not nearly as badly as predicted. Each copy of the worm scanned the Internet for Windows NT or Windows 2000 servers that did not have the Microsoft security patch installed. Each time it found an unsecured server, the worm copied itself to that server. The new copy then scanned for other servers to infect. Depending on the number of unsecured servers, a worm could conceivably create hundreds of thousands of copies.

The Code Red worm had instructions to do three things:

* Replicate itself for the first 20 days of each month
* Replace Web pages on infected servers with a page featuring the message "Hacked by Chinese"
* Launch a concerted attack on the White House Web site in an attempt to overwhelm it [Source: eEye Digital Security]

Upon successful infection, Code Red would wait for the appointed hour and connect to the www.whitehouse.gov domain. This attack would consist of the infected systems simultaneously sending 100 connections to port 80 of www.whitehouse.gov (198.137.240.91).

The U.S. government changed the IP address of www.whitehouse.gov to circumvent that particular threat from the worm and issued a general warning about the worm, advising users of Windows NT or Windows 2000 Web servers to make sure they installed the security patch.

A worm called Storm, which showed up in 2007, immediately started making a name for itself. Storm uses social engineering techniques to trick users into loading the worm on their computers. So far, it's working -- experts believe between one million and 50 million computers have been infected [source: Schneier].

When the worm is launched, it opens a back door into the computer, adds the infected machine to a botnet and installs code that hides itself. The botnets are small peer-to-peer groups rather than a larger, more easily identified network. Experts think the people controlling Storm rent out their micro-botnets to deliver spam or adware, or for denial-of-service attacks on Web sites.

How to Protect Your Computer from Viruses

You can protect yourself against viruses with a few simple steps:

* If you are truly worried about traditional (as opposed to e-mail) viruses, you should be running a more secure operating system like UNIX. You never hear about viruses on these operating systems because the security features keep viruses (and unwanted human visitors) away from your hard disk.
* If you are using an unsecured operating system, then buying virus protection software is a nice safeguard.
* If you simply avoid programs from unknown sources (like the Internet), and instead stick with commercial software purchased on CDs, you eliminate almost all of the risk from traditional viruses.
* You should make sure that Macro Virus Protection is enabled in all Microsoft applications, and you should NEVER run macros in a document unless you know what they do. There is seldom a good reason to add macros to a document, so avoiding all macros is a great policy.
* You should never double-click on an e-mail attachment that contains an executable. Attachments that come in as Word files (.DOC), spreadsheets (.XLS), images (.GIF), etc., are data files and they can do no damage (noting the macro virus problem in Word and Excel documents mentioned above). However, some viruses can now come in through .JPG graphic file attachments. A file with an extension like EXE, COM or VBS is an executable, and an executable can do any sort of damage it wants. Once you run it, you have given it permission to do anything on your machine. The only defense is never to run executables that arrive via e-mail; a simple extension check is sketched below.
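Here is the extension check mentioned above as a small Python sketch; the blocklist is illustrative, not a complete inventory of dangerous file types:

    # Illustrative blocklist of executable extensions -- not exhaustive.
    EXECUTABLE_EXTENSIONS = {".exe", ".com", ".vbs", ".bat", ".scr"}

    def looks_executable(filename: str) -> bool:
        """Return True if the attachment's extension is on the blocklist."""
        name = filename.lower()
        return any(name.endswith(ext) for ext in EXECUTABLE_EXTENSIONS)

    print(looks_executable("report.doc"))           # False
    print(looks_executable("PrettyPicture.EXE"))    # True
    print(looks_executable("love-letter.txt.vbs"))  # True -- the double-extension trick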


Thursday, December 10, 2009

Information about wifi

What is WIFI?


Wireless Fidelity – popularly known as Wi-Fi and developed on the IEEE 802.11 standards – is a recent advancement in wireless communication. As the name indicates, Wi-Fi provides wireless access to applications and data across a radio network. Wi-Fi sets up numerous ways to build a connection between the transmitter and the receiver, such as DSSS, FHSS, IR (infrared) and OFDM. Development of Wi-Fi technology began in 1997, when the Institute of Electrical and Electronics Engineers (IEEE) introduced the 802.11 standard, which carried higher capacities of data across the network. This greatly interested some major brands across the globe, such as Cisco Systems and 3COM. Initially, the price of Wi-Fi was very high, but around 2002 the IT market witnessed the arrival of a breakthrough product that worked under the new 802.11g standard. In 2003, IEEE sanctioned the standard and the world saw the creation of affordable Wi-Fi for the masses.

Wi-Fi provides its users with the liberty of connecting to the Internet from any place, such as their home, office or a public place, without the hassles of plugging in wires. Wi-Fi is quicker than the conventional modem for accessing information over a large network. With the help of different amplifiers, users can easily change their location without disruption in their network access. Wi-Fi devices are compliant with each other to grant efficient access of information to the user. A location where users can connect to the wireless network is called a Wi-Fi hotspot. Through a Wi-Fi hotspot, users can even enhance their home business, as accessing information through Wi-Fi is simple. Accessing a wireless network through a hotspot is cost-free in some cases, while in others it may carry additional charges. Many standard Wi-Fi devices, such as PCI, miniPCI, USB, Cardbus and PC card, and ExpressCard, make the Wi-Fi experience convenient and pleasurable for users. Distance from the access point can lessen the signal strength to quite an extent; amplifiers and high-gain antennas (with which experimenters such as Ermanno Pietrosemoli of EsLaRed in Venezuela have set long-distance Wi-Fi records) can be used to boost the signal strength of the network. These devices create an embedded system that can correspond with any other node on the Internet.

The market is flooded with various Wi-Fi software tools. Each of these tools is specifically designed for different types of networks, operating systems and usage. For accessing multiple network platforms, Aircrack-ng is by far the best amongst its counterparts. The preferred Wi-Fi software tools for Windows users are KNSGEM II, NetStumbler, OmniPeek, Stumbverter, WiFi Hopper and APTools. Unix users should pick any of the following: Aircrack, Aircrack-ptw, AirSnort, CoWPAtty and Karma. Mac users, meanwhile, are presented with these options: MacStumbler, KisMAC and Kismet. It is imperative for users to pick out a Wi-Fi software tool that is compatible with their computer and its dynamics.

What is a Computer Network?

A network is any collection of independent computers that communicate with one another over a shared network medium. A computer network is a collection of two or more connected computers. When these computers are joined in a network, people can share files and peripherals such as modems, printers, tape backup drives, or CD-ROM drives. When networks at multiple locations are connected using services available from phone companies, people can send e-mail, share links to the global Internet, or conduct video conferences in real time with other remote users. As companies rely on applications like electronic mail and database management for core business operations, computer networking becomes increasingly important.

Every network includes:

* At least two computers (a server or client workstation).
* Network interface cards (NICs).
* A connection medium, usually a wire or cable, although wireless communication between networked computers and peripherals is also possible.
* Network operating system software, such as Microsoft Windows NT or 2000, Novell NetWare, Unix or Linux.

Types of Networks:
LANs (Local Area Networks)

LANs are networks usually confined to a geographic area, such as a single building or a college campus. LANs can be small, linking as few as three computers, but often link hundreds of computers used by thousands of people. The development of standard networking protocols and media has resulted in worldwide proliferation of LANs throughout business and educational organizations.
WANs (Wide Area Networks)

Wide area networking combines multiple LANs that are geographically separate. This is accomplished by connecting the different LANs using services such as dedicated leased phone lines, dial-up phone lines (both synchronous and asynchronous), satellite links, and data packet carrier services. Wide area networking can be as simple as a modem and remote access server for employees to dial into, or it can be as complex as hundreds of branch offices globally linked using special routing protocols and filters to minimize the expense of sending data over vast distances.
Internet

The Internet is a system of linked networks that are worldwide in scope and facilitate data communication services such as remote login, file transfer, electronic mail, the World Wide Web and newsgroups.

With the meteoric rise in demand for connectivity, the Internet has become a communications highway for millions of users. The Internet was initially restricted to military and academic institutions, but now it is a full-fledged conduit for any and all forms of information and commerce. Internet websites now provide personal, educational, political and economic resources to every corner of the planet.
Intranet

With the advancements made in browser-based software for the Internet, many private organizations are implementing intranets. An intranet is a private network utilizing Internet-type tools, but available only within that organization. For large organizations, an intranet provides an easy access mode to corporate information for employees.
MANs (Metropolitan area Networks)

A MAN refers to a network of computers within a city.
VPN (Virtual Private Network)

A VPN uses a technique known as tunneling to transfer data securely on the Internet to a remote access server on your workplace network. Using a VPN helps you save money by using the public Internet instead of making long-distance phone calls to connect securely with your private network. There are two ways to create a VPN connection: by dialing an Internet service provider (ISP), or by connecting directly to the Internet.
Categories of Network:
Networks can be divided into two main categories:

* Peer-to-peer.
* Server-based.

In peer-to-peer networking there are no dedicated servers or hierarchy among the computers. All of the computers are equal and therefore known as peers. Normally each computer serves as both client and server, and no one is assigned to be an administrator responsible for the entire network.

Peer-to-peer networks are good choices for the needs of small organizations where the users are located in the same general area, security is not an issue, and the organization and the network will have limited growth within the foreseeable future.

The term client/server refers to the concept of sharing the work involved in processing data between the client computer and a more powerful server computer.
The client/server network is the most efficient way to provide:

* Databases and management of applications such as Spreadsheets, Accounting, Communications and Document management.
* Network management.
* Centralized file storage.

The client/server model is basically an implementation of distributed or cooperative processing. At the heart of the model is the concept of splitting application functions between a client and a server processor. The division of labor between the different processors enables the application designer to place an application function on the processor that is most appropriate for that function. This lets the software designer optimize the use of processors--providing the greatest possible return on investment for the hardware.

Client/server application design also lets the application provider mask the actual location of application function. The user often does not know where a specific operation is executing. The entire function may execute in either the PC or server, or the function may be split between them. This masking of application function locations enables system implementers to upgrade portions of a system over time with a minimum disruption of application operations, while protecting the investment in existing hardware and software.
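
A minimal Python sketch of this split, using the standard socket library, shows the server doing the processing (here just uppercasing text) while the client merely sends a request and displays the reply. The port number is arbitrary:

    import socket
    import threading
    import time

    def server() -> None:
        """Accept one connection and return the request, uppercased."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", 5050))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(data.upper())   # the "processing" happens server-side

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)   # crude pause so the server is listening before we connect

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 5050))
        cli.sendall(b"hello from the client")
        print(cli.recv(1024))   # b'HELLO FROM THE CLIENT'
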
The OSI Model:

Open System Interconnection (OSI) reference model has become an International standard and serves as a guide for networking. This model is the best known and most widely used guide to describe networking environments. Vendors design network products based on the specifications of the OSI model. It provides a description of how network hardware and software work together in a layered fashion to make communications possible. It also helps with trouble shooting by providing a frame of reference that describes how components are supposed to function.

There are seven layers to get familiar with: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer and application layer.

* Physical Layer - just that: the physical parts of the network, such as wires and cables and their media, along with cable lengths. This layer also covers the electrical signals that transmit data throughout the system.
* Data Link Layer - where meaning is assigned to the electrical signals in the network. This layer also determines the size and format of data sent to printers and other devices (these are also called nodes in the network), and it defines the error detection and correction schemes that ensure data was sent and received correctly.
* Network Layer - provides the definition for the connection of two dissimilar networks.
* Transport Layer - allows data to be broken into smaller packets so it can be distributed and addressed to other nodes (workstations).
* Session Layer - handles the task of carrying information from one node (workstation) to another. A session has to be established before information can be transported to another computer.
* Presentation Layer - responsible for encoding and decoding data sent to the node.
* Application Layer - allows you to use an application that communicates with, say, the operating system of a server. A good example would be using your web browser to interact with the operating system on a server such as Windows NT, which in turn gets the data you requested.

Network Architectures:
Ethernet

Ethernet is the most popular physical layer LAN technology in use today. Other LAN types include Token Ring, Fast Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM) and LocalTalk. Ethernet is popular because it strikes a good balance between speed, cost and ease of installation. These benefits, combined with wide acceptance in the computer marketplace and the ability to support virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today. The Institute for Electrical and Electronic Engineers (IEEE) defines the Ethernet standard as IEEE Standard 802.3. This standard defines rules for configuring an Ethernet network as well as specifying how elements in an Ethernet network interact with one another. By adhering to the IEEE standard, network equipment and network protocols can communicate efficiently.
Fast Ethernet

For Ethernet networks that need higher transmission speeds, the Fast Ethernet standard (IEEE 802.3u) has been established. This standard raises the Ethernet speed limit from 10 Megabits per second (Mbps) to 100 Mbps with only minimal changes to the existing cable structure. There are three types of Fast Ethernet: 100BASE-TX for use with level 5 UTP cable, 100BASE-FX for use with fiber-optic cable, and 100BASE-T4 which utilizes an extra two wires for use with level 3 UTP cable. The 100BASE-TX standard has become the most popular due to its close compatibility with the 10BASE-T Ethernet standard. For the network manager, the incorporation of Fast Ethernet into an existing configuration presents a host of decisions. Managers must determine the number of users in each site on the network that need the higher throughput, decide which segments of the backbone need to be reconfigured specifically for 100BASE-T and then choose the necessary hardware to connect the 100BASE-T segments with existing 10BASE-T segments. Gigabit Ethernet is a future technology that promises a migration path beyond Fast Ethernet so the next generation of networks will support even higher data transfer speeds.
Token Ring

Token Ring is another form of network configuration which differs from Ethernet in that all messages are transferred in a unidirectional manner along the ring at all times. Data is transmitted in tokens, which are passed along the ring and viewed by each device. When a device sees a message addressed to it, that device copies the message and then marks that message as being read. As the message makes its way along the ring, it eventually gets back to the sender who now notes that the message was received by the intended device. The sender can then remove the message and free that token for use by others.
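
A toy Python simulation of one frame's trip around the ring captures the sequence just described; the device names are made up:

    # One frame's trip around a unidirectional ring of four devices.
    ring = ["A", "B", "C", "D"]
    frame = {"src": "A", "dst": "C", "data": "hi", "read": False}

    position = ring.index(frame["src"])
    while True:
        position = (position + 1) % len(ring)   # pass to the next device
        device = ring[position]
        if device == frame["dst"]:
            print(f"{device} copies the message and marks it as read")
            frame["read"] = True
        if device == frame["src"]:
            print(f"{device} sees the frame returned (read={frame['read']}) and frees the token")
            break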

Various PC vendors have been proponents of Token Ring networks at different times and thus these types of networks have been implemented in many organizations.
FDDI

FDDI (Fiber-Distributed Data Interface) is a standard for data transmission on fiber optic lines in a local area network that can extend in range up to 200 km (124 miles). The FDDI protocol is based on the token ring protocol. In addition to being large geographically, an FDDI local area network can support thousands of users.
Protocols:

Network protocols are standards that allow computers to communicate. A protocol defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination. Protocols also define procedures for handling lost or damaged transmissions or "packets." TCP/IP (for UNIX, Windows NT, Windows 95 and other platforms), IPX (for Novell NetWare), DECnet (for networking Digital Equipment Corp. computers), AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks) are the main types of network protocols in use today.

Although each network protocol is different, they all share the same physical cabling. This common method of accessing the physical network allows multiple protocols to peacefully coexist over the network media, and allows the builder of a network to use common hardware for a variety of protocols. This concept is known as "protocol independence."

Some Important Protocols and their job:
Protocol Acronym Its Job
Point-To-Point TCP/IP The backbone protocol of the internet. Popular also for intranets using the internet
Transmission Control Protocol/internet Protocol TCP/IP The backbone protocol of the internet. Popular also for intranets using the internet
Internetwork Package Exchange/Sequenced Packet Exchange IPX/SPX This is a standard protocol for Novell Network Operating System
NetBIOS Extended User Interface NetBEUI This is a Microsoft protocol that doesn’t support routing to other networks
File Transfer Protocol FTP Used to send and receive files from a remote host
Hyper Transfer Protocol HTTP Used for the web to send documents that are encoded in HTML.
Network File Services NFS Allows network nodes or workstations to access files and drives as if they were their own.
Simple Mail Transfer Protocol SMTP Used to send Email over a network
Telnet Used to connect to a host and emulate a terminal that the remote server can recognize
Introduction to TCP/IP Networks:

TCP/IP-based networks play an increasingly important role in computer networks. Perhaps one reason for their appeal is that they are based on an open specification that is not controlled by any vendor.
What Is TCP/IP?

TCP stands for Transmission Control Protocol and IP stands for Internet Protocol. The term TCP/IP is not limited just to these two protocols, however. Frequently, the term TCP/IP is used to refer to a group of protocols related to the TCP and IP protocols such as the User Datagram Protocol (UDP), File Transfer Protocol (FTP), Terminal Emulation Protocol (TELNET), and so on.
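
As a small illustration of one member of the family, the following Python sketch sends a single UDP datagram over the loopback interface; the port number is arbitrary:

    import socket

    # Create a receiving socket bound to a local port.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 6060))

    # UDP needs no connection setup: just address the datagram and send.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"a single datagram", ("127.0.0.1", 6060))

    data, addr = receiver.recvfrom(1024)
    print(data, addr)   # b'a single datagram' ('127.0.0.1', <sender's port>)

    sender.close()
    receiver.close()
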
The Origins of TCP/IP

In the late 1960s, DARPA (the Defense Advanced Research Projects Agency), in the United States, noticed that there was a rapid proliferation of computers in military communications. Computers, because they can be easily programmed, provide flexibility in achieving network functions that is not available with other types of communications equipment. The computers then used in military communications were manufactured by different vendors and were designed to interoperate with computers from that vendor only. Vendors used proprietary protocols in their communications equipment. The military had a multi-vendor network but no common protocol to support the heterogeneous equipment from different vendors.
Network Cables and Stuff:

In a network you will commonly find three types of cables used: coaxial cable, fiber optic and twisted pair.
Thick Coaxial Cable

This type of cable is usually yellow in color, is used in what are called thicknets, and has two conductors. This coax can be used in 500-meter lengths. The cable itself is made up of a solid center wire with a braided metal shield and plastic sheathing protecting the rest of the wire.
Thin Coaxial Cable

Just as the thick coaxial cable is used in thicknets, the thin version is used in thinnets. This type of cable is also referred to as RG-58. The cable is really just a cheaper version of the thick cable.
Fiber Optic Cable

As we all know fiber optics are pretty darn cool and not cheap. This cable is smaller and can carry a vast amount of information fast and over long distances.
Twisted Pair Cables

These come in two flavors: unshielded and shielded.
Unshielded Twisted Pair (UTP)

This is the most popular form of cable in the network and the cheapest form that you can go with. UTP has four pairs of wires, all inside plastic sheathing. The biggest reason the pairs are twisted is to protect the wires from interference from one another. Each wire is only protected with a thin plastic sheath.
Shielded Twisted Pair (STP)

STP is more common in high-speed networks. The biggest difference between UTP and STP is that STP uses a metallic shield wrapping to protect the wires from interference.

-Something else to note about these cables is that they are rated by category numbers, too. The bigger the number, the better the protection from interference. Most networks should go with no less than CAT 3, and CAT 5 is most recommended.

-Now that you know about cables, we need to know about connectors. This is pretty important, and you will most likely need the RJ-45 connector. This is the cousin of the phone jack connector and looks real similar, with the exception that the RJ-45 is bigger. Most commonly, your connectors come in two flavors: BNC (Bayonet Neill-Concelman), used in thinnets, and RJ-45, used in smaller networks using UTP/STP.
Ethernet Cabling

Now to familiarize you with more on the Ethernet and its cabling, we need to look at the 10's. 10Base2 is considered thin Ethernet, thinnet or thinwire, and uses thin coaxial cable to create a 10 Mbps network. The cable segments in this network can't be over 185 meters in length. These cables connect with the BNC connector. Also, as a note, any unused connections must have a 50-ohm terminator.

10Base5 is considered thicknet and is used with a thick coaxial cable arrangement. The good side to the coaxial cable is the high-speed transfer, and cable segments can be up to 500 meters between nodes/workstations. You will typically see the same speed as 10Base2 but larger cable lengths for more versatility.

10BaseT: the “T” stands for twisted, as in UTP (Unshielded Twisted Pair), which it uses for 10 Mbps of transfer. The down side to this is you can only have cable lengths of 100 meters between nodes/workstations. The good side to this network is that it is easy to set up and cheap! This is why these networks are so common and ideal for small offices or homes.

100BaseT is considered Fast Ethernet and uses twisted-pair cable (Category 5 UTP for 100BASE-TX) to reach data transfers of 100 Mbps. This system is a little more expensive but remains as popular as 10BaseT and cheaper than most other types of networks. This one, of course, would be the cheap, fast version.

10BaseF has the advantage of fiber optics, and the F stands for just that. This arrangement is a little more complicated and uses special connectors and NICs along with hubs to create its network. Pretty darn neat and not too cheap on the wallet.

An important part of designing and installing an Ethernet is selecting the appropriate Ethernet medium. There are four major types of media in use today: Thickwire for 10BASE5 networks, thin coax for 10BASE2 networks, unshielded twisted pair (UTP) for 10BASE-T networks and fiber optic for 10BASE-FL or Fiber-Optic Inter-Repeater Link (FOIRL) networks. This wide variety of media reflects the evolution of Ethernet and also points to the technology's flexibility. Thickwire was one of the first cabling systems used in Ethernet but was expensive and difficult to use. This evolved to thin coax, which is easier to work with and less expensive.
Network Topologies:
What is a Network topology?

A network topology is the geometric arrangement of nodes and cable links in a LAN.

There are three topologies to think about when you get into networks: the star, the ring, and the bus.

Ring: a ring topology features a logically closed loop. Data packets travel in a single direction around the ring from one network device to the next. Each network device acts as a repeater, meaning it regenerates the signal.

Star: in a star topology, each node has a dedicated set of wires connecting it to a central network hub. Since all traffic passes through the hub, the hub becomes a central point for isolating network problems and gathering network statistics.

Bus: in the bus topology, each node (computer, server, peripheral, etc.) attaches directly to a common cable. This topology most often serves as the backbone for a network. In some instances, such as in classrooms or labs, a bus will connect small workgroups.
Collisions:

Ethernet is a shared medium, so there are rules for sending packets of data to avoid conflicts and protect data integrity. Nodes determine when the network is available for sending packets. It is possible that two nodes at different locations will attempt to send data at the same time. When both transmit a packet to the network at the same time, a collision results.

Minimizing collisions is a crucial element in the design and operation of networks. Increased collisions are often the result of too many users on the network, which results in a lot of contention for network bandwidth. This can slow the performance of the network from the user's point of view. Segmenting the network, where a network is divided into different pieces joined together logically with a bridge or switch, is one way of reducing an overcrowded network.
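To make the collision-recovery idea concrete, here is a minimal Python sketch of the truncated binary exponential backoff that Ethernet nodes use after a collision. The node names and slot-time model are simplified illustrations, not a real implementation:

import random

def backoff_slots(attempt):
    # After the Nth consecutive collision, a node waits a random number
    # of slot times drawn from 0 .. 2^k - 1, where k = min(attempt, 10).
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1)

# Two colliding nodes pick their waits independently; the chance they
# pick the same slot (and collide again) shrinks with each attempt.
for node in ("node_a", "node_b"):
    print(node, "waits", backoff_slots(2), "slot times")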
Ethernet Products:

The standards and technology that have just been discussed help define the specific products that network managers use to build Ethernet networks. The following text discusses the key products needed to build an Ethernet LAN.
Transceivers

Transceivers are used to connect nodes to the various Ethernet media. Most computers and network interface cards contain a built-in 10BASE-T or 10BASE2 transceiver, allowing them to be connected directly to Ethernet without requiring an external transceiver. Many Ethernet devices provide an AUI connector to allow the user to connect to any media type via an external transceiver. The AUI connector consists of a 15-pin D-shell type connector, female on the computer side, male on the transceiver side. Thickwire (10BASE5) cables also use transceivers to allow connections.

For Fast Ethernet networks, a new interface called the MII (Media Independent Interface) was developed to offer a flexible way to support 100 Mbps connections. The MII is a popular way to connect 100BASE-FX links to copper-based Fast Ethernet devices.
Network Interface Cards:

Network interface cards, commonly referred to as NICs, are used to connect a PC to a network. The NIC provides a physical connection between the networking cable and the computer's internal bus. Different computers have different bus architectures: PCI bus master slots are most commonly found on 486/Pentium PCs, and ISA expansion slots are commonly found on 386 and older PCs. NICs come in three basic varieties: 8-bit, 16-bit, and 32-bit. The larger the number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the network cable.

Many NIC adapters comply with Plug-n-Play specifications. On these systems, NICs are automatically configured without user intervention, while on non-Plug-n-Play systems, configuration is done manually through a setup program and/or DIP switches.

Cards are available to support almost all networking standards, including the latest Fast Ethernet environment. Fast Ethernet NICs are often 10/100 capable, and will automatically set to the appropriate speed. Full duplex networking is another option, where a dedicated connection to a switch allows a NIC to operate at twice the speed.
Hubs/Repeaters:

Hubs/repeaters are used to connect together two or more Ethernet segments of any media type. In larger designs, signal quality begins to deteriorate as segments exceed their maximum length. Hubs provide the signal amplification required to allow a segment to be extended a greater distance. A hub takes any incoming signal and repeats it out all ports.

Ethernet hubs are necessary in star topologies such as 10BASE-T. A multi-port twisted pair hub allows several point-to-point segments to be joined into one network. One end of the point-to-point link is attached to the hub and the other is attached to the computer. If the hub is attached to a backbone, then all computers at the end of the twisted pair segments can communicate with all the hosts on the backbone. The number and type of hubs in any one collision domain is limited by the Ethernet rules. These repeater rules are discussed in more detail later.
Network Type   Max Nodes Per Segment   Max Distance Per Segment
10BASE-T       2                       100m
10BASE2        30                      185m
10BASE5        100                     500m
10BASE-FL      2                       2000m
Adding Speed:

While repeaters allow LANs to extend beyond normal distance limitations, they still limit the number of nodes that can be supported. Bridges and switches, however, allow LANs to grow significantly larger by virtue of their ability to support full Ethernet segments on each port. Additionally, bridges and switches selectively filter network traffic to only those packets needed on each segment - this significantly increases throughput on each segment and on the overall network. By providing better performance and more flexibility for network topologies, bridges and switches will continue to gain popularity among network managers.
Bridges:

The function of a bridge is to connect separate networks together. Bridges connect different network types (such as Ethernet and Fast Ethernet) or networks of the same type. Bridges map the Ethernet addresses of the nodes residing on each network segment and allow only necessary traffic to pass through the bridge. When a packet is received by the bridge, the bridge determines the destination and source segments. If the segments are the same, the packet is dropped ("filtered"); if the segments are different, the packet is "forwarded" to the correct segment. Additionally, bridges do not forward bad or misaligned packets.

Bridges are also called "store-and-forward" devices because they look at the whole Ethernet packet before making filtering or forwarding decisions. Filtering packets and regenerating forwarded packets enables bridging technology to split a network into separate collision domains. This allows for greater distances and more repeaters to be used in the total network design.
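The learn-then-filter-or-forward logic described above can be sketched in a few lines. This is only an illustration in Python, with invented MAC addresses and port numbers, not a real bridge:

# Maps a station's Ethernet (MAC) address to the bridge port it was seen on.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port              # learn the source address
    out_port = mac_table.get(dst_mac)
    if out_port == in_port:
        return "filtered"                     # same segment: drop the frame
    elif out_port is not None:
        return "forwarded to port %d" % out_port
    else:
        return "flooded to all other ports"   # unknown destination

print(handle_frame("aa:aa", "bb:bb", 1))  # bb:bb unknown -> flooded
print(handle_frame("bb:bb", "aa:aa", 2))  # aa:aa learned on port 1 -> forwarded
print(handle_frame("aa:aa", "bb:bb", 1))  # now forwarded to port 2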
Ethernet Switches:

Ethernet switches are an expansion of the concept of Ethernet bridging. LAN switches can link four, six, ten or more networks together, and have two basic architectures: cut-through and store-and-forward. In the past, cut-through switches were faster because they examined only the packet destination address before forwarding it on to its destination segment. A store-and-forward switch, on the other hand, accepts and analyzes the entire packet before forwarding it to its destination.

It takes more time to examine the entire packet, but it allows the switch to catch certain packet errors and keep them from propagating through the network. Both cut-through and store-and-forward switches separate a network into collision domains, allowing network design rules to be extended. Each of the segments attached to an Ethernet switch has a full 10 Mbps of bandwidth shared by fewer users, which results in better performance (as opposed to hubs that only allow bandwidth sharing from a single Ethernet). Newer switches today offer high-speed links, FDDI, Fast Ethernet or ATM. These are used to link switches together or give added bandwidth to high-traffic servers. A network composed of a number of switches linked together via uplinks is termed a "collapsed backbone" network.
Routers:

Routers filter out network traffic by specific protocol rather than by packet address. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Network speed often decreases due to this type of intelligent forwarding. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address. However, in more complex networks, overall efficiency is improved by using routers.
What is a network firewall?

A firewall is a system or group of systems that enforces an access control policy between two networks. The actual means by which this is accomplished varies widely, but in principle, the firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is that it implements an access control policy. If you don't have a good idea of what kind of access you want to allow or to deny, a firewall really won't help you. It's also important to recognize that the firewall's configuration, because it is a mechanism for enforcing policy, imposes its policy on everything behind it. Administrators for firewalls managing the connectivity for a large number of hosts therefore have a heavy responsibility.
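As a toy illustration of the block/permit pair of mechanisms, here is a minimal first-match rule list in Python; the rules, networks, and ports are made-up examples of a policy, not a recommended configuration:

import ipaddress

# First matching rule wins; the final rule is the default policy.
rules = [
    ("permit", ipaddress.ip_network("192.168.1.0/24"), 22),   # internal SSH
    ("block",  ipaddress.ip_network("0.0.0.0/0"),      22),   # no outside SSH
    ("permit", ipaddress.ip_network("0.0.0.0/0"),      80),   # web for everyone
    ("block",  ipaddress.ip_network("0.0.0.0/0"),      None), # default deny
]

def check(src_ip, dst_port):
    src = ipaddress.ip_address(src_ip)
    for action, net, port in rules:
        if src in net and (port is None or port == dst_port):
            return action
    return "block"

print(check("192.168.1.5", 22))   # permit: internal host, SSH
print(check("203.0.113.9", 22))   # block: outside host, SSH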
Network Design Criteria:

Ethernets and Fast Ethernets have design rules that must be followed in order to function correctly. Maximum number of nodes, number of repeaters and maximum segment distances are defined by the electrical and mechanical design properties of each type of Ethernet and Fast Ethernet media.

A network using repeaters, for instance, functions with the timing constraints of Ethernet. Although electrical signals on the Ethernet media travel near the speed of light, it still takes a finite time for the signal to travel from one end of a large Ethernet to another. The Ethernet standard assumes it will take roughly 50 microseconds for a signal to reach its destination.

Ethernet is subject to the "5-4-3" rule of repeater placement: the network can only have five segments connected; it can only use four repeaters; and of the five segments, only three can have users attached to them; the other two must be inter-repeater links.

If the design of the network violates these repeater and placement rules, timing guidelines will not be met and the sending station will resend the packet. This can lead to lost packets and excessive resent packets, which can slow network performance and create trouble for applications. Fast Ethernet has modified repeater rules, since the minimum packet size takes less time to transmit than in regular Ethernet; the tighter timing budget on network links allows for fewer repeaters. In Fast Ethernet networks, there are two classes of repeaters. Class I repeaters have a latency of 0.7 microseconds or less and are limited to one repeater per network. Class II repeaters have a latency of 0.46 microseconds or less and are limited to two repeaters per network. The following are the distance (diameter) characteristics for these types of Fast Ethernet repeater combinations:
Fast Ethernet             Copper   Fiber
No Repeaters              100m     412m*
One Class I Repeater      200m     272m
One Class II Repeater     200m     272m
Two Class II Repeaters    205m     228m

* 2 km in full duplex mode



When conditions require greater distances or an increase in the number of nodes/repeaters, then a bridge, router or switch can be used to connect multiple networks together. These devices join two or more separate networks, allowing network design criteria to be restored. Switches allow network designers to build large networks that function well. The reduction in costs of bridges and switches reduces the impact of repeater rules on network design.

Each network connected via one of these devices is referred to as a separate collision domain in the overall network.
Types of Servers:
Device Servers

A device server is defined as a specialized, network-based hardware device designed to perform a single function or a specialized set of server functions. It is characterized by a minimal operating architecture that requires no per-seat network operating system license, and client access that is independent of any operating system or proprietary protocol. In addition, the device server is a "closed box," delivering extreme ease of installation and minimal maintenance, and it can be managed remotely by the client via a Web browser.

Print servers, terminal servers, remote access servers and network time servers are examples of device servers which are specialized for particular functions. Each of these types of servers has unique configuration attributes in hardware or software that help them to perform best in their particular arena.
Print Servers

Print servers allow printers to be shared by other users on the network. Supporting either parallel and/or serial interfaces, a print server accepts print jobs from any person on the network using supported protocols and manages those jobs on each appropriate printer.

Print servers generally do not contain a large amount of memory; the server simply stores incoming jobs in a queue. When the desired printer becomes available, the server allows the host to transmit the data to the appropriate printer port. The print server can then queue and print each job in the order in which print requests are received, regardless of the protocol used or the size of the job.
Multiport Device Servers

Devices that are attached to a network through a multiport device server can be shared between terminals and hosts at both the local site and throughout the network. A single terminal may be connected to several hosts at the same time (in multiple concurrent sessions), and can switch between them. Multiport device servers are also used to network devices that have only serial outputs. A connection between serial ports on different servers is opened, allowing data to move between the two devices.

Given its natural translation ability, a multi-protocol multiport device server can perform conversions between the protocols it knows, like LAT and TCP/IP. While server bandwidth is not adequate for large file transfers, it can easily handle host-to-host inquiry/response applications, electronic mailbox checking, etc. And it is far more economical than the alternatives of acquiring expensive host software and special-purpose converters. Multiport device and print servers give their users greater flexibility in configuring and managing their networks.

Whether it is moving printers and other peripherals from one network to another, expanding the dimensions of interoperability or preparing for growth, multiport device servers can fulfill your needs, all without major rewiring.
Access Servers

While Ethernet is limited to a geographic area, remote users such as traveling salespeople need access to network-based resources. Remote LAN access, or remote access, is a popular way to provide this connectivity. Access servers use telephone services to link a user or office with an office network. Dial-up remote access solutions such as ISDN or asynchronous dial introduce more flexibility, offering both the remote office and the remote user the economy and flexibility of "pay as you go" telephone services.

ISDN is a special telephone service that offers three channels: two 64 Kbps "B" channels for user data and a "D" channel for setting up the connection. With ISDN, the B channels can be combined for double bandwidth or separated for different applications or users. With asynchronous remote access, regular telephone lines are combined with modems and remote access servers to allow users and networks to dial anywhere in the world and have data access. Remote access servers provide connection points for both dial-in and dial-out applications on the network to which they are attached. These hybrid devices route and filter protocols and offer other services such as modem pooling and terminal/printer services. A remote PC user can connect from any available telephone jack (RJ-11), including those in hotel rooms or on most airplanes.
Network Time Servers

A network time server is a server specialized in the handling of timing information from sources such as satellites or radio broadcasts and is capable of providing this timing data to its attached network. Specialized protocols such as NTP or udp/time allow a time server to communicate to other network nodes ensuring that activities that must be coordinated according to their time of execution are synchronized correctly. GPS satellites are one source of information that can allow global installations to achieve constant timing.
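Querying a time server takes surprisingly little code. The sketch below sends a minimal SNTP (simplified NTP) request in Python; it assumes a reachable public server such as pool.ntp.org, and a real client would implement the full NTP protocol:

import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"      # assumed reachable public server pool
NTP_EPOCH_OFFSET = 2208988800    # seconds between the 1900 and 1970 epochs

# Byte 0 = 0x1b: LI=0, version 3, mode 3 (client); rest of packet zeroed.
packet = b"\x1b" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    s.sendto(packet, (NTP_SERVER, 123))
    data, _ = s.recvfrom(48)

# The transmit timestamp's seconds field lives in bytes 40-43 of the reply.
ntp_seconds = struct.unpack("!I", data[40:44])[0]
print("server time:", time.ctime(ntp_seconds - NTP_EPOCH_OFFSET))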
IP Addressing:

An IP (Internet Protocol) address is a unique identifier for a node or host connection on an IP network. An IP address is a 32 bit binary number usually represented as 4 decimal values, each representing 8 bits, in the range 0 to 255 (known as octets) separated by decimal points. This is known as "dotted decimal" notation.

Example: 140.179.220.200

It is sometimes useful to view the values in their binary form.

140 .179 .220 .200

10001100.10110011.11011100.11001000
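You can reproduce this conversion yourself; for example, a small Python sketch:

# Convert a dotted-decimal IP address to its binary form, octet by octet.
addr = "140.179.220.200"
print(".".join(format(int(octet), "08b") for octet in addr.split(".")))
# prints 10001100.10110011.11011100.11001000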

Every IP address consists of two parts, one identifying the network and one identifying the node. The Class of the address and the subnet mask determine which part belongs to the network address and which part belongs to the node address.
Address Classes:

There are 5 different address classes. You can determine which class any IP address is in by examining the first 4 bits of the IP address.

Class A addresses begin with 0xxx, or 1 to 126 decimal.

Class B addresses begin with 10xx, or 128 to 191 decimal.

Class C addresses begin with 110x, or 192 to 223 decimal.

Class D addresses begin with 1110, or 224 to 239 decimal.

Class E addresses begin with 1111, or 240 to 254 decimal.

Addresses beginning with 01111111, or 127 decimal, are reserved for loopback and for internal testing on a local machine. [You can test this: you should always be able to ping 127.0.0.1, which points to yourself] Class D addresses are reserved for multicasting. Class E addresses are reserved for future use. They should not be used for host addresses.

Now we can see how the Class determines, by default, which part of the IP address belongs to the network (N) and which part belongs to the node (n).

Class A -- NNNNNNNN.nnnnnnnn.nnnnnnnn.nnnnnnnn

Class B -- NNNNNNNN.NNNNNNNN.nnnnnnnn.nnnnnnnn

Class C -- NNNNNNNN.NNNNNNNN.NNNNNNNN.nnnnnnnn

In the example, 140.179.220.200 is a Class B address so by default the Network part of the address (also known as the Network Address) is defined by the first two octets (140.179.x.x) and the node part is defined by the last 2 octets (x.x.220.200).

In order to specify the network address for a given IP address, the node section is set to all "0"s. In our example, 140.179.0.0 specifies the network address for 140.179.220.200. When the node section is set to all "1"s, it specifies a broadcast that is sent to all hosts on the network. 140.179.255.255 specifies the example broadcast address. Note that this is true regardless of the length of the node section.
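Python's standard ipaddress module can confirm the network and broadcast addresses for our Class B example (the /16 prefix is simply the default Class B mask written in CIDR form, which is covered later):

import ipaddress

net = ipaddress.ip_network("140.179.0.0/16")
host = ipaddress.ip_address("140.179.220.200")

print(host in net)             # True: the host lies inside this network
print(net.network_address)     # 140.179.0.0 (node section all 0s)
print(net.broadcast_address)   # 140.179.255.255 (node section all 1s)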
Private Subnets:

There are three IP network address ranges reserved for private networks: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. They can be used by anyone setting up an internal IP network, such as a lab or home LAN behind a NAT or proxy server or a router. It is always safe to use these because routers on the Internet will never forward packets coming from these addresses.

Subnetting an IP Network can be done for a variety of reasons, including organization, use of different physical media (such as Ethernet, FDDI, WAN, etc.), preservation of address space, and security. The most common reason is to control network traffic. In an Ethernet network, all nodes on a segment see all the packets transmitted by all the other nodes on that segment. Performance can be adversely affected under heavy traffic loads, due to collisions and the resulting retransmissions. A router is used to connect IP networks to minimize the amount of traffic each segment must receive.
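The private ranges are also baked into Python's ipaddress module, so a quick check looks like this (the addresses are arbitrary examples):

import ipaddress

for ip in ("10.1.2.3", "172.20.0.1", "192.168.0.10", "8.8.8.8"):
    kind = "private" if ipaddress.ip_address(ip).is_private else "public"
    print(ip, "is", kind)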
Subnet Masking

Applying a subnet mask to an IP address allows you to identify the network and node parts of the address. The network bits are represented by the 1s in the mask, and the node bits are represented by the 0s. Performing a bitwise logical AND operation between the IP address and the subnet mask results in the Network Address or Number.

For example, using our test IP address and the default Class B subnet mask, we get:

10001100.10110011.11011100.11001000 140.179.220.200 Class B IP Address

11111111.11111111.00000000.00000000 255.255.000.000 Default Class B Subnet Mask

10001100.10110011.00000000.00000000 140.179.000.000 Network Address
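The same bitwise AND can be performed directly on the 32-bit values; a small Python sketch:

import socket
import struct

def network_address(ip, mask):
    # Convert dotted decimal to 32-bit integers, AND them, convert back.
    ip_int = struct.unpack("!I", socket.inet_aton(ip))[0]
    mask_int = struct.unpack("!I", socket.inet_aton(mask))[0]
    return socket.inet_ntoa(struct.pack("!I", ip_int & mask_int))

print(network_address("140.179.220.200", "255.255.0.0"))  # 140.179.0.0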
Default subnet masks:

Class A - 255.0.0.0 - 11111111.00000000.00000000.00000000

Class B - 255.255.0.0 - 11111111.11111111.00000000.00000000

Class C - 255.255.255.0 - 11111111.11111111.11111111.00000000

CIDR -- Classless InterDomain Routing.

CIDR was invented several years ago to keep the Internet from running out of IP addresses. The "classful" system of allocating IP addresses can be very wasteful: anyone who could reasonably show a need for more than 254 host addresses was given a Class B address block of 65,534 host addresses. Even more wasteful were companies and organizations that were allocated Class A address blocks, which contain over 16 million host addresses! Only a tiny percentage of the allocated Class A and Class B address space has ever been actually assigned to a host computer on the Internet.

People realized that addresses could be conserved if the class system was eliminated. By accurately allocating only the amount of address space that was actually needed, the address space crisis could be avoided for many years. This was first proposed in 1992 as a scheme called Supernetting.

The use of a CIDR-notated address is the same as for a classful address. Classful addresses can easily be written in CIDR notation (Class A = /8, Class B = /16, and Class C = /24).

It is currently almost impossible for an individual or company to be allocated their own IP address blocks. You will simply be told to get them from your ISP. The reason for this is the ever-growing size of the internet routing table. Just 5 years ago, there were less than 5000 network routes in the entire Internet. Today, there are over 90,000. Using CIDR, the biggest ISPs are allocated large chunks of address space (usually with a subnet mask of /19 or even smaller); the ISP's customers (often other, smaller ISPs) are then allocated networks from the big ISP's pool. That way, all the big ISP's customers (and their customers, and so on) are accessible via 1 network route on the Internet.

It is expected that CIDR will keep the Internet happily in IP addresses for the next few years at least. After that, IPv6, with 128-bit addresses, will be needed. Under IPv6, even sloppy address allocation would comfortably allow a billion unique IP addresses for every person on Earth.
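The ISP allocation scheme described above is easy to model with Python's ipaddress module; the /19 block below is an arbitrary example range, carved into /24 customer networks:

import ipaddress

isp_block = ipaddress.ip_network("198.51.96.0/19")  # example ISP allocation
print(isp_block.num_addresses)                      # 8192 addresses

# Hand out /24 networks (254 usable hosts each) to customers.
customers = list(isp_block.subnets(new_prefix=24))
print(len(customers))   # 32 customer networks
print(customers[0])     # 198.51.96.0/24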
Examining your network with commands:

Ping

PING is used to check for a response from another computer on the network. It can tell you a great deal of information about the status of the network and the computers you are communicating with.

Ping returns different responses depending on the computer in question; the responses also vary depending on the options used.

Ping uses ICMP, part of the IP layer, to request a response from the host; it does not use TCP.

It takes its name from a submarine sonar search: you send a short sound burst and listen for an echo, a ping, coming back.

In an IP network, `ping' sends a short data burst (a single packet) and listens for a single packet in reply. Since this tests the most basic function of an IP network (delivery of a single packet), it's easy to see how you can learn a lot from some `pings'.

To stop ping, type control-c. This terminates the program and prints out a nice summary of the number of packets transmitted, the number received, and the percentage of packets lost, plus the minimum, average, and maximum round-trip times of the packets.

Sample ping session

PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=8 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=9 ttl=255 time=2 ms

localhost ping statistics

10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 2/2/2 ms
meikro$

The Time To Live (TTL) field can be interesting. Its main purpose is to ensure that a packet doesn't live forever on the network; the packet is eventually discarded once it is deemed "lost." But for us, it provides additional information: we can use the TTL to determine approximately how many router hops the packet has gone through. In this case the hop count is 255 minus N, where N is the TTL of the returning Echo Replies. If the TTL field varies in successive pings, it could indicate that the successive reply packets are taking different routes, which isn't a great thing.

The time field is an indication of the round-trip time to get a packet to the remote host, measured in milliseconds. In general, it's best if round-trip times are under 200 milliseconds. The time it takes a packet to reach its destination is called latency. If you see a large variance in the round-trip times (which is called "jitter"), you are going to see poor performance talking to the host.
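These calculations are simple enough to do by hand, but here is a small Python sketch that derives an approximate hop count, the round-trip statistics, and a rough jitter figure from numbers you might read off a ping session (the values below are made up):

# Round-trip times (ms) and reply TTL read from a hypothetical ping run.
rtts = [2.1, 2.3, 2.0, 9.8, 2.2]
reply_ttl = 247

print("approx. hops:", 255 - reply_ttl)          # initial TTL minus reply TTL
print("min/avg/max (ms):", min(rtts), sum(rtts) / len(rtts), max(rtts))
print("jitter, as max - min (ms):", max(rtts) - min(rtts))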
NSLOOKUP

NSLOOKUP is an application that facilitates looking up hostnames on the network. It can reveal the IP address of a host or, using the IP address, return the host name.

It is very important when troubleshooting problems on a network that you can verify the components of the networking process. Nslookup allows this by revealing details within the infrastructure.
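Python's socket library exposes the same forward and reverse lookups that nslookup performs (the hostname below is just an example):

import socket

# Forward lookup: hostname -> IP address.
print(socket.gethostbyname("www.berkeley.edu"))

# Reverse lookup: IP address -> hostname; raises socket.herror
# if no reverse (PTR) record exists.
print(socket.gethostbyaddr("127.0.0.1")[0])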
NETSTAT

NETSTAT is used to look up the various active connections within a computer. It is helpful to understand what computers or networks you are connected to. This allows you to further investigate problems. One host may be responding well but another may be less responsive.
IPconfig

This is a Microsoft Windows NT/2000 command. It is very useful in determining what could be wrong with a network.

When used with the /all switch, this command reveals an enormous amount of troubleshooting information about the system.

Windows 2000 IP Configuration

Host Name . . . . . . . . . . . . : cowder
Primary DNS Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Broadcast
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . :
WAN (PPP/SLIP) Interface
Physical Address. . . . . . . . . : 00-53-45-00-00-00
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 12.90.108.123
Subnet Mask . . . . . . . . . . . : 255.255.255.255
Default Gateway . . . . . . . . . : 12.90.108.125
DNS Servers . . . . . . . . . . . : 12.102.244.2
204.127.129.2
Traceroute

Traceroute on Unix and Linux (or tracert in the Microsoft world) attempts to trace the current network path to a destination. Here is an example of a traceroute run to www.berkeley.edu:

$ traceroute www.berkeley.edu
traceroute to amber.Berkeley.EDU (128.32.25.12), 30 hops max, 40 byte packets
 1 sf1-e3.wired.net (206.221.193.1) 3.135 ms 3.021 ms 3.616 ms
 2 sf0-e2s2.wired.net (205.227.206.33) 1.829 ms 3.886 ms 2.772 ms
 3 paloalto-cr10.bbnplanet.net (131.119.26.105) 5.327 ms 4.597 ms 5.729 ms
 4 paloalto-br1.bbnplanet.net (131.119.0.193) 4.842 ms 4.615 ms 3.425 ms
 5 sl-sj-2.sprintlink.net (4.0.1.66) 7.488 ms 38.804 ms 7.708 ms
 6 144.232.8.81 (144.232.8.81) 6.560 ms 6.631 ms 6.565 ms
 7 144.232.4.97 (144.232.4.97) 7.638 ms 7.948 ms 8.129 ms
 8 144.228.146.50 (144.228.146.50) 9.504 ms 12.684 ms 16.648 ms
 9 f5-0.inr-666-eva.berkeley.edu (198.128.16.21) 9.762 ms 10.611 ms 10.403 ms
10 f0-0.inr-107-eva.Berkeley.EDU (128.32.2.1) 11.478 ms 10.868 ms 9.367 ms
11 f8-0.inr-100-eva.Berkeley.EDU (128.32.235.100) 10.738 ms 11.693 ms 12.520 ms

Who Created the Internet Network?

No single person or organization created the modern Internet, not Al Gore, Lyndon Johnson, or any other individual. Instead, multiple people developed the key technologies that later grew to become the Internet:
•Email - Long before the World Wide Web, email was the dominant communication method on the Internet. In 1971, Ray Tomlinson developed the first email system that worked over the early Internet.


•Ethernet - The physical communication technology underlying the Internet, Ethernet was created by Robert Metcalfe and David Boggs in 1973.


•TCP/IP - In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper titled "A Protocol for Packet Network Intercommunication." The paper's authors, Vinton Cerf and Robert Kahn, described a protocol called TCP that incorporated both connection-oriented and datagram services. This protocol later became known as TCP/IP.

About network

In information technology, a network is a series of points or nodes interconnected by communication paths. Networks can interconnect with other networks and contain subnetworks.
The most common topology or general configurations of networks include the bus, star, Token Ring, and mesh topologies. Networks can also be characterized in terms of spatial distance as local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs).

A given network can also be characterized by the type of data transmission technology in use on it (for example, a TCP/IP or Systems Network Architecture network); by whether it carries voice, data, or both kinds of signals; by who can use the network (public or private); by the usual nature of its connections (dial-up or switched, dedicated or nonswitched, or virtual connections); and by the types of physical links (for example, optical fiber, coaxial cable, and Unshielded Twisted Pair). Large telephone networks and networks using their infrastructure (such as the Internet) have sharing and exchange arrangements with other companies so that larger networks are created.
 