NETWORK TOPOLOGIES

Network topology

Network topology is the arrangement of the various elements (links, nodes, etc.) of a computer network.[1][2] Essentially, it is the topological[3] structure of a network and may be depicted physically or logically. Physical topology is the placement of the various components of a network, including device location and cable installation, while logical topology illustrates how data flows within a network, regardless of its physical design. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two networks, yet their topologies may be identical.

An example is a local area network (LAN): Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. Conversely, mapping the data flow between the components determines the logical topology of the network.

Topology[edit]

There are two basic categories of network topologies:[4] physical topologies and logical topologies.

The cabling layout used to link devices is the physical topology of the network. This refers to the layout of cabling, the locations of nodes, and the interconnections between the nodes and the cabling.[1] The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunications circuits.

The logical topology, in contrast, is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network’s logical topology is not necessarily the same as its physical topology. For example, the original twisted-pair Ethernet using repeater hubs was a logical bus topology with a physical star topology layout. Token Ring is a logical ring topology, but is wired as a physical star from the Media Access Unit.

The logical classification of network topologies generally follows the same categories as the physical classification, but describes the path that the data takes between nodes as opposed to the actual physical connections between nodes. Logical topologies are generally determined by network protocols rather than by the physical layout of cables, wires, and network devices or by the flow of the electrical signals. In many cases, however, the paths that the electrical signals take between nodes closely match the logical flow of data, hence the convention of using the terms logical topology and signal topology interchangeably.

Logical topologies are often closely associated with Media Access Control methods and protocols. Logical topologies can be dynamically reconfigured by special types of equipment such as routers and switches.

Diagram of different network topologies.

The study of network topology recognizes eight basic topologies:[5] point-to-point, bus, star, ring or circular, mesh, tree, hybrid, and daisy chain.

Point-to-point[edit]

The simplest topology with a dedicated link between two endpoints. Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe’s Law.

Permanent (dedicated)

The easiest of the point-to-point variations to understand is a point-to-point communications channel that appears, to the user, to be permanently associated with the two endpoints. A children’s tin can telephone is one example of a physical dedicated channel.
Within many switched telecommunications systems, it is possible to establish a permanent circuit. One example might be a telephone in the lobby of a public building, which is programmed to ring only the number of a telephone dispatcher. “Nailing down” a switched connection saves the cost of running a physical circuit between the two points. The resources in such a connection can be released when no longer needed, for example, a television circuit from a parade route back to the studio.
Switched

Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. This is the basic mode of conventional telephony.

Bus[edit]

Main article: Bus network

Bus network topology

In local area networks where bus topology is used, each node is connected to a single cable with the help of interface connectors. This central cable is the backbone of the network and is known as the bus (thus the name). A signal from the source travels in both directions to all machines connected on the bus cable until it finds the intended recipient. If the machine address does not match the intended address for the data, the machine ignores the data. Alternatively, if the data matches the machine address, the data is accepted. Because the bus topology consists of only one wire, it is rather inexpensive to implement when compared to other topologies. However, the low cost of implementing the technology is offset by the high cost of managing the network. Additionally, because only one cable is utilized, it can be a single point of failure.
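The accept-or-ignore behaviour described above can be sketched in a few lines of Python. The `Bus` and `BusNode` classes below are purely illustrative, not a real networking API: every attached node sees every transmission, and only the addressee keeps the data.

```python
class BusNode:
    def __init__(self, address):
        self.address = address
        self.received = []

    def on_signal(self, destination, data):
        # Every node on the bus sees every frame; only the addressee keeps it.
        if destination == self.address:
            self.received.append(data)


class Bus:
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def transmit(self, destination, data):
        # The signal propagates along the shared cable to all attached machines.
        for node in self.nodes:
            node.on_signal(destination, data)


bus = Bus()
a, b, c = BusNode("A"), BusNode("B"), BusNode("C")
for n in (a, b, c):
    bus.attach(n)

bus.transmit("B", "hello")
print(b.received)  # ['hello'] — B's address matched, so B accepted the data
print(a.received)  # []        — A saw the signal but ignored it
```

Note that `transmit` loops over every node: this mirrors the single shared medium, and is also why a break in that one cable partitions the whole network.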
Linear bus
The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the ‘bus’, which is also commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously.[1]
Note: When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. As a solution, the two endpoints of the bus are normally terminated with a device called a terminator that prevents this reflection.
Distributed bus
The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints that are created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium).

Star[edit]

Main article: Star network

Star network topology

In local area networks with a star topology, each network host is connected with a point-to-point connection to a central node called a hub, router, or switch; every computer is thus indirectly connected to every other node through the central device. In this arrangement the central device can be thought of as the server and the peripherals as the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the nodes on the network must be connected to one central device. All traffic that traverses the network passes through the central hub, which acts as a signal repeater. The star topology is considered the easiest topology to design and implement, and an advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure.
Extended star

A type of network topology in which a network that is based upon the physical star topology has one or more repeaters between the central node and the peripheral or ‘spoke’ nodes, the repeaters being used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that which is supported by the transmitter power of the central node or beyond that which is supported by the standard upon which the physical layer of the physical star network is based.
If the repeaters in a network that is based upon the physical extended star topology are replaced with hubs or switches, then a hybrid network topology is created that is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.
Distributed Star

A type of network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., ‘daisy-chained’ – with no central or top level connection point (e.g., two or more ‘stacked’ hubs, along with their associated star connected nodes or ‘spokes’).

Ring[edit]

Main article: Ring network

Ring network topology

In a ring topology the nodes are connected in a circular fashion, so that they form a closed loop. Data travels around the ring in one direction, and each device on the ring acts as a repeater to keep the signal strong as it travels; each device incorporates a receiver for the incoming signal and a transmitter to send the data on to the next device in the ring. The network is dependent on the ability of the signal to travel around the ring: when a device sends data, the data must travel through each device on the ring until it reaches its destination, so every node is a critical link.[4] In a ring topology, there is no server computer present; all nodes work as a server and repeat the signal. The disadvantage of this topology is that if one node stops working, the entire network is affected or stops working.

Mesh[edit]

Main article: Mesh networking

The value of a fully meshed network is proportional to the exponent of the number of subscribers, assuming that communicating groups of any two endpoints, up to and including all the endpoints, are possible; this is approximated by Reed’s Law.

Fully connected network

Fully connected mesh topology

A fully connected network is a communication network in which each node is connected to every other node. In graph theory it is known as a complete graph. A fully connected network does not need to use switching or broadcasting. However, its major disadvantage is that the number of connections grows quadratically with the number of nodes, as per the formula

c = n(n-1)/2,

and so it is extremely impractical for large networks. A two-node network is technically a fully connected network.
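The quadratic growth is easy to check directly; a short sketch of the link-count formula:

```python
def full_mesh_links(n):
    """Number of point-to-point links needed to fully connect n nodes:
    each of the n nodes links to the other n-1, and each link is shared
    by two nodes, giving n(n-1)/2."""
    return n * (n - 1) // 2


for n in (2, 5, 10, 100):
    print(n, "nodes ->", full_mesh_links(n), "links")
# 2 nodes -> 1 link, 5 -> 10, 10 -> 45, 100 -> 4950
```

Going from 10 to 100 nodes multiplies the link count by over a hundred, which is why full meshes are reserved for small, reliability-critical networks.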
Partially connected

Partially connected mesh topology

The type of network topology in which some of the nodes of the network are connected to more than one other node in the network with a point-to-point link – this makes it possible to take advantage of some of the redundancy that is provided by a physical fully connected mesh topology without the expense and complexity required for a connection between every node in the network.

Tree[edit]

A tree topology is essentially a combination of bus topology and star topology: the nodes of a bus topology are replaced with standalone star topology networks. The result inherits both the disadvantages of the bus topology and the advantages of the star topology.

For example, if the connection on the central linear core breaks, the two groups of networks it joined can no longer communicate, much like nodes of a bus topology. Within each star topology segment, however, the nodes will still communicate with each other effectively.

It has a root node, intermediate nodes, and leaf nodes. This structure is arranged in hierarchical form, and any intermediate node can have any number of child nodes.

A pure tree topology can be impractical to construct, however, because each node in the network is a computing device that may support only one or two connections, so more than two child nodes often cannot be attached to a parent node.[citation needed] There are many substructures under the tree topology, but the most convenient is the B-tree topology, in which finding errors is relatively easy.[citation needed]

Many supercomputers use a fat tree network,[6] including the Yellowstone (supercomputer), the Tianhe-2, the Meiko Scientific CS-2, the Earth Simulator, the Cray X2, the CM-5, and many Altix supercomputers.

  1. A network that is based upon the physical hierarchical topology must have at least three levels in the hierarchy of the tree, since a network with a central ‘root’ node and only one hierarchical level below it would exhibit the physical topology of a star.
  2. A network that is based upon the physical hierarchical topology and with a branching factor of 1 would be classified as a physical linear topology.
  3. The branching factor, f, is independent of the total number of nodes in the network. Therefore, if the nodes in the network require ports for connection to other nodes, the total number of ports per node can be kept low even when the total number of nodes is large. The cost of adding ports to each node depends entirely on the branching factor, and can therefore be kept as low as required without any effect on the total number of nodes that are possible.
  4. The total number of point-to-point links in a network that is based upon the physical hierarchical topology will be one less than the total number of nodes in the network.
  5. If the nodes in a network that is based upon the physical hierarchical topology are required to perform any processing upon the data that is transmitted between nodes in the network, the nodes that are at higher levels in the hierarchy will be required to perform more processing operations on behalf of other nodes than the nodes that are lower in the hierarchy.
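Property 4 above (one fewer link than nodes) can be illustrated by explicitly building a full tree with branching factor f and counting links as they are added. The helper below is a hypothetical sketch, not a network simulator:

```python
def build_tree(levels, f):
    """Construct a full f-ary tree level by level and return
    (node_count, link_count). Every node except the root gains
    exactly one uplink to its parent, so links == nodes - 1."""
    links = 0
    nodes = 1        # start with the root
    frontier = 1     # number of nodes at the current (deepest) level
    for _ in range(levels - 1):
        children = frontier * f   # each frontier node gets f children
        links += children         # one new point-to-point link per child
        nodes += children
        frontier = children
    return nodes, links


# Branching factor f = 3, four levels: 1 + 3 + 9 + 27 = 40 nodes.
print(build_tree(4, 3))  # (40, 39)
```

Whatever the branching factor, the link count stays at nodes − 1, while per-node port count stays bounded by f + 1 (f children plus one parent), consistent with property 3.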
Advantages

  • It is scalable: secondary nodes allow more devices to be connected to a central node.
  • Devices are connected with point-to-point links.
  • Having different levels makes the network more manageable, allowing easier fault identification and isolation.
Disadvantages

  • Maintenance of the network may be an issue when the network spans a great area.
  • Since it is a variation of bus topology, if the backbone fails, the entire network is crippled.
An example of this network could be cable TV technology. Other examples are in dynamic tree based wireless networks for military, mining and otherwise mobile applications.[7] The Naval Postgraduate School, Monterey CA, demonstrated such tree based wireless networks for border security.[8] In a pilot system, aerial cameras kept aloft by balloons relayed real time high resolution video to ground personnel via a dynamic self healing tree based network.

Hybrid[edit]

Hybrid networks use a combination of any two or more topologies, in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network connected to a tree network is still a tree network topology. A hybrid topology is always produced when two different basic network topologies are connected. Two common examples of hybrid networks are the star-ring network and the star-bus network:

  • A star ring network consists of two or more ring topologies connected using a multistation access unit (MAU) as a centralized hub.
  • A star-bus network consists of two or more star topologies connected using a bus trunk (the bus trunk serves as the network’s backbone).

While grid and torus networks have found popularity in high-performance computing applications, some systems have used genetic algorithms to design custom networks that have the fewest possible hops in between different nodes. Some of the resulting layouts are nearly incomprehensible, although they function quite well.[citation needed]

A Snowflake topology is really a “Star of Stars” network, so it exhibits characteristics of a hybrid network topology but is not composed of two different basic network topologies being connected.

Daisy chain[edit]

Except for star-based networks, the easiest way to add more computers into a network is by daisy-chaining, or connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.

  • A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
  • By connecting the computers at each end, a ring topology can be formed. An advantage of the ring is that the number of transmitters and receivers can be cut in half, since a message will eventually loop all of the way around. When a node sends a message, the message is processed by each computer in the ring. If the ring breaks at a particular link then the transmission can be sent via the reverse path thereby ensuring that all nodes are always connected in the case of a single failure.
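The reverse-path behaviour in the second bullet can be sketched as a toy model (the node numbering and the `broken_link` parameter are invented for illustration; this is not a real ring protocol):

```python
def ring_path(n, src, dst, broken_link=None):
    """Return the sequence of nodes a message visits in a ring of n nodes.
    The forward direction is tried first; if a broken link blocks it, the
    message is sent via the reverse path instead. broken_link is a pair
    (i, j) naming one failed ring segment."""
    blocked = tuple(sorted(broken_link)) if broken_link else None
    for step in (1, -1):                 # forward, then reverse
        path = [src]
        node = src
        while node != dst:
            nxt = (node + step) % n
            if blocked and tuple(sorted((node, nxt))) == blocked:
                break                    # this direction is cut; try the other
            path.append(nxt)
            node = nxt
        if node == dst:
            return path
    return None                          # unreachable (two failures)


print(ring_path(6, 0, 3))                      # [0, 1, 2, 3]
print(ring_path(6, 0, 3, broken_link=(1, 2)))  # [0, 5, 4, 3] via reverse path
```

A single failed segment never isolates a node, matching the claim above; a second simultaneous failure, however, partitions the ring.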

Centralization[edit]

The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes.

If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.

A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed.

As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.

To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will “learn” the layout of the network by “listening” on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
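The learning behaviour described above can be sketched as a toy model in Python. The `LearningSwitch` class is hypothetical, not a real switch implementation: it records which port each source address was seen on, then forwards later frames to the learned port only, flooding just when the destination is still unknown.

```python
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}            # address -> port: the in-memory lookup table

    def receive(self, in_port, src, dst):
        """Handle a frame arriving on in_port; return the output port list."""
        self.table[src] = in_port  # "learn" (or refresh) the sender's location
        if dst in self.table:
            return [self.table[dst]]        # forward to the known port only
        # Unknown destination: flood to every port except the arrival port.
        return [p for p in range(self.num_ports) if p != in_port]


sw = LearningSwitch(4)
print(sw.receive(0, src="A", dst="B"))  # B unknown: flood -> [1, 2, 3]
print(sw.receive(1, src="B", dst="A"))  # A learned on port 0 -> [0]
print(sw.receive(0, src="A", dst="B"))  # B now known on port 1 -> [1]
```

After the first exchange the switch stops flooding, so traffic between two hosts no longer reaches the other ports at all.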

Decentralization[edit]

In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes with two or more paths between them to provide redundant paths to be used in case the link providing one of the paths fails. This decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. In 2012 the IEEE published the Shortest path bridging protocol to ease configuration tasks and allows all paths to be active which increases bandwidth and redundancy between all devices.[9][10][11][12][13]

This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance.

A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n-1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.

IT in Various Fields

Computer technology has made several important impacts on our society. Today the computer plays a very important role in every field of life: many activities in daily life can be performed easily and quickly, a lot of time is saved, and the overall cost of solving a particular problem is reduced. Some of the fields where computers are widely used are:

1- Business
Today, in global markets, it is impossible to run a business without the use of computer technology. Many business activities are performed more quickly and efficiently by using computers, and administrative paperwork is also reduced. Many businesses use websites to sell their products and contact their customers.

2- Education
Computers are used in teaching and research. The students can solve different kinds of problems quickly and efficiently by using computers. They can also collect different information on the Internet.

3- Banks
Computers are widely used in banks for record keeping and maintaining the accounts of customers. Most banks provide the facility of ATMs, so customers can withdraw money with an ATM card from any branch of that bank (or another bank) at any time of day.

4- Entertainment
Computers also play a very important role in the entertainment of human beings. Nowadays, a computer can be used to watch television programs on the Internet. People can also watch movies, listen to music, and play games on the computer. Many computer games and other entertainment materials of different kinds are available on the Internet.

5- Home
At home, a computer is used to maintain personal records and to access a wide range of other information on the Internet. People can also use a computer at home for tasks such as making home budgets.

6- Medical
Nearly every area of the medical field uses computers. For example, computers are used for maintaining patient history and other records. They are also used for patient monitoring and the diagnosis of diseases.

 

HISTORY OF COMPUTER TECHNOLOGY

Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick.[7] The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered to be the earliest known mechanical analog computer, and the earliest known geared mechanism.[8] Comparable geared devices did not emerge in Europe until the 16th century,[9] and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.[10]

Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world’s first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. Colossus, developed during the Second World War to decrypt German messages, was the first electronic digital computer. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring.[11] The first recognisably modern electronic digital stored-program computer was the Manchester Small-Scale Experimental Machine (SSEM), which ran its first program on 21 June 1948.[12]

The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison the first transistorised computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version.[13]

Data storage[edit]

Main article: Data storage device

Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete.[14] Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line.[15] The first random-access digital storage device was the Williams tube, based on a standard cathode ray tube,[16] but the information stored in it, like that in delay line memory, was volatile in that it had to be continuously refreshed, and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932[17] and used in the Ferranti Mark 1, the world’s first commercially available general-purpose electronic computer.[18]

IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system.[19] Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs.[20] Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007 almost 94% of the data stored worldwide was held digitally:[21] 52% on hard disks, 28% on optical devices and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007,[22] doubling roughly every 3 years.[23]
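The quoted doubling rate can be checked arithmetically: growing from under 3 exabytes in 1986 to 295 exabytes in 2007 implies log2(295/3) doublings over 21 years.

```python
import math

# Capacity grew from ~3 EB (1986) to 295 EB (2007); how often did it double?
doublings = math.log2(295 / 3)                 # number of doublings over the span
years_per_doubling = (2007 - 1986) / doublings

print(round(doublings, 1), round(years_per_doubling, 1))  # 6.6 3.2
```

About 6.6 doublings in 21 years works out to one doubling roughly every 3 years, consistent with the figure cited above.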

Databases[edit]

Main article: Database

Database management systems emerged in the 1960s[24] to address the problem of storing and retrieving large amounts of data accurately and quickly. One of the earliest such systems was IBM‘s Information Management System (IMS),[24] which is still widely deployed more than 40 years later.[25] IMS stores data hierarchically,[24] but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows and columns. The first commercially available relational database management system (RDBMS) was available from Oracle in 1980.[26]

All database management systems consist of a number of components that together allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A characteristic of all databases is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.[24]

The extensible markup language (XML) has become a popular format for data representation in recent years. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their “robust implementation verified by years of both theoretical and practical effort”.[27] As an evolution of the Standard Generalized Markup Language (SGML), XML’s text-based structure offers the advantage of being both machine and human-readable.[28]
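As a brief illustration of XML being both machine- and human-readable, Python’s standard-library parser can extract structure from a small document (the element and attribute names here are invented for the example):

```python
import xml.etree.ElementTree as ET

# A tiny XML document: readable by eye, and trivially parsed by machine.
doc = """<catalog>
  <book id="1"><title>Relational Databases</title></book>
  <book id="2"><title>Markup Languages</title></book>
</catalog>"""

root = ET.fromstring(doc)
titles = [book.find("title").text for book in root.findall("book")]
print(titles)  # ['Relational Databases', 'Markup Languages']
```

The same nested text that a human can skim maps directly onto a tree of elements that a program can traverse.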

Data retrieval[edit]

The relational database model introduced a programming-language independent Structured Query Language (SQL), based on relational algebra.[26]
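A minimal SQL example, using Python’s built-in SQLite engine (the table and column names are invented for illustration): the query is declarative and independent of any particular programming language or physical storage layout.

```python
import sqlite3

# An in-memory relational database with one small table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Ada", "Engineering"), ("Grace", "Engineering"), ("Alan", "Research")],
)

# SQL states *what* to compute (a count per department), not *how*.
rows = conn.execute(
    "SELECT department, COUNT(*) FROM employees"
    " GROUP BY department ORDER BY department"
).fetchall()
print(rows)  # [('Engineering', 2), ('Research', 1)]
conn.close()
```

The same `SELECT` statement would run unchanged against any SQL engine holding an equivalent table, which is the portability the relational model introduced.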

The terms “data” and “information” are not synonymous. Anything stored is data, but it only becomes information when it is organized and presented meaningfully.[29] Most of the world’s digital data is unstructured, and stored in a variety of different physical formats[30][b] even within a single organization. Data warehouses began to be developed in the 1980s to integrate these disparate stores. They typically contain data extracted from various sources, including external sources such as the Internet, organized in such a way as to facilitate decision support systems (DSS).[31]

Data transmission[edit]

Data transmission has three aspects: transmission, propagation, and reception.[32] It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.[22]

XML has been increasingly employed as a means of data interchange since the early 2000s,[33] particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP,[28] describing “data-in-transit rather than … data-at-rest”.[33] One of the challenges of such usage is converting data from relational databases into XML Document Object Model (DOM) structures.[34]

Data manipulation[edit]

Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore’s law): machines’ application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world’s general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world’s storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.[22]
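The doubling times above can be converted into annual growth factors: a quantity that doubles every m months grows by a factor of 2^(12/m) per year. A simple arithmetic sketch:

```python
def annual_growth_factor(doubling_months):
    """Annual multiplier for a quantity that doubles every
    `doubling_months` months: 2 ** (12 / m)."""
    return 2 ** (12 / doubling_months)


# Doubling periods from Hilbert and Lopez's estimates.
for label, months in [
    ("application-specific computation", 14),
    ("general-purpose computation", 18),
    ("telecommunication capacity", 34),
    ("storage capacity", 40),
]:
    print(f"{label}: x{annual_growth_factor(months):.2f} per year")
```

For instance, doubling every 14 months corresponds to growing by roughly 81% per year, while doubling every 40 months is closer to 23% per year, making the gap between computation and storage growth concrete.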

Massive amounts of data are stored worldwide every day, but unless it can be analysed and presented effectively it essentially resides in what have been called data tombs: “data archives that are seldom visited”.[35]To address that issue, the field of data mining – “the process of discovering interesting patterns and knowledge from large amounts of data”[36] – emerged in the late 1980s.[37]

Academic perspective[edit]

In an academic context, the Association for Computing Machinery defines IT as “undergraduate degree programs that prepare students to meet the computer technology needs of business, government, healthcare, schools, and other kinds of organizations …. IT specialists assume responsibility for selecting hardware and software products appropriate for an organization, integrating those products with organizational needs and infrastructure, and installing, customizing, and maintaining those applications for the organization’s computer users.”[38]

Commercial and employment perspective[edit]

In a business context, the Information Technology Association of America has defined information technology as “the study, design, development, application, implementation, support or management of computer-based information systems“.[39] The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization’s technology life cycle, by which hardware and software are maintained, upgraded and replaced.

The business value of information technology lies in the automation of business processes, provision of information for decision making, connecting businesses with their customers, and the provision of productivity tools to increase efficiency.

Worldwide IT spending forecast[40] (billions of U.S. dollars)

Category              2014 spending    2015 spending
Devices                      685              725
Data center systems          140              144
Enterprise software          321              344
IT services                  967            1,007
Telecom services           1,635            1,668
Total                      3,748            3,888

Ethical perspective[edit]

Main article: Information ethics

The field of information ethics was established by mathematician Norbert Wiener in the 1940s.[42] Some of the ethical issues associated with the use of information technology include:[43]

  • Breaches of copyright by those downloading files stored without the permission of the copyright holders
  • Employers monitoring their employees’ emails and other Internet usage
  • Unsolicited emails
  • Hackers accessing online databases
  • Web sites installing cookies or spyware to monitor a user’s online activities

Information technology

Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data,[1] often in the context of a business or other enterprise.[2]

The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, engineering, healthcare, e-commerce and computer services.[3][a]

Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC,[5] but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that “the new technology does not yet have a single established name. We shall call it information technology (IT).” Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.[6]

Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940) and electronic (1940–present).[5] This article focuses on the most recent period (electronic), which began in about 1940.

History of computer science

Charles Babbage is credited with inventing the first mechanical computer.

Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.

The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. The ancient Sanskrit treatise Shulba Sutras, or “Rules of the Chord”, is a book of algorithms written in 800 BC for constructing geometric objects like altars using a peg and chord, an early precursor of the modern field of computational geometry.

Blaise Pascal designed and constructed the first working mechanical calculator, Pascal’s calculator, in 1642.[2] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[3] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment.

Charles Babbage started the design of the first automatic mechanical calculator, his difference engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[4] He started developing this machine in 1834, and “in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom”,[5] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[6]

Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage’s impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[7] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage’s Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as “Babbage’s dream come true”.[8]

During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[9] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[10][11] The world’s first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[12] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.

Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[13][14] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[15] and later the IBM 709[16] computers, which were widely used during the exploration period of such devices. “Still, working with the IBM [computer] was frustrating … if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again”.[13] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[14]

Time has seen significant improvements in the usability and effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human aid was needed for efficient use – in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.

Contributions[edit]

The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[17]

Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society – in fact, along with electronics, it is a founding science of the current epoch of human history called the Information Age and a driver of the Information Revolution, seen as the third major leap in human technological progress after the Industrial Revolution (1750-1850 CE) and the Agricultural Revolution (8000-5000 BC).


Philosophy[edit]

A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[23] Peter Denning‘s working group argued that they are theory, abstraction (modeling), and design.[24] Amnon H. Eden described them as the “rationalist paradigm” (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the “technocratic paradigm” (which might be found in engineering approaches, most prominently in software engineering), and the “scientific paradigm” (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[25]

Name of the field[edit]

Although first proposed in 1956,[14] the term “computer science” appears in a 1959 article in Communications of the ACM,[26] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences, analogous to the creation of Harvard Business School in 1921,[27] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[26] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such programs, starting with Purdue in 1962.[28] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[29] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[30] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a distinct field of data analysis, including statistics and databases.

Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM – turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[31] Three months later in the same journal, comptologist was suggested, followed the next year by hypologist.[32] The term computics has also been suggested.[33] In Europe, terms derived from contracted translations of the expression “automatic information” (e.g. “informazione automatica” in Italian) or “information and mathematics” are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[34]

A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that “computer science is no more about computers than astronomy is about telescopes.”[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[10] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[14]

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term “software engineering” means, and how computer science is defined.[35] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[36]

The academic, political, and funding aspects of computer science tend to depend on whether a department formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.

Areas of computer science[edit]

Further information: Outline of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[37][38] CSAB, formerly called Computing Sciences Accreditation Board – which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE-CS)[39] – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and telecommunications, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[37]

Theoretical computer science[edit]

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

Theory of computation[edit]

Main article: Theory of computation

According to Peter J. Denning, the fundamental question underlying computer science is, “What can be (efficiently) automated?”[10] The study of the theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.

The famous “P=NP?” problem, one of the Millennium Prize Problems,[40] is an open problem in the theory of computation.

Subfields: automata theory, computability theory, computational complexity theory, cryptography, quantum computing theory

Information and coding theory[edit]

Information theory is related to the quantification of information. This was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[41] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
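
The central quantity here is Shannon entropy, which gives the average number of bits needed per symbol of a source; a minimal sketch:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit per toss; a biased coin carries less,
# which is why its outcomes compress better.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.9, 0.1]))   # about 0.469
```

Shannon’s source coding theorem says this entropy is the fundamental limit on lossless compression: no code can use fewer bits per symbol on average.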

Algorithms and data structures[edit]

Algorithms and data structures is the study of commonly used computational methods and their computational efficiency.
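
A classic illustration of the field’s concerns is binary search, whose O(log n) running time is the payoff for keeping the data sorted; a minimal sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.
    Runs in O(log n) comparisons versus O(n) for a linear scan."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1        # target can only be in the upper half
        else:
            hi = mid - 1        # target can only be in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```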

Subfields: analysis of algorithms, algorithms, data structures, combinatorial optimization, computational geometry

Programming language theory[edit]

Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering and linguistics. It is an active research area, with numerous dedicated academic journals.
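
The flavor of the field can be sketched with a toy type checker for a tiny expression language, in the spirit of the typing judgment Γ ⊢ e : T (the representation and names here are illustrative, not any standard API):

```python
# A toy type checker: expressions are tuples, the environment maps
# variable names to types. Illustrative only.

def type_of(expr, env):
    kind = expr[0]
    if kind == "int":                       # integer literal
        return "Int"
    if kind == "bool":                      # boolean literal
        return "Bool"
    if kind == "var":                       # look the variable up in Gamma
        return env[expr[1]]
    if kind == "add":                       # Int + Int : Int
        if type_of(expr[1], env) == type_of(expr[2], env) == "Int":
            return "Int"
        raise TypeError("add expects two Ints")
    raise ValueError(f"unknown expression: {kind}")

env = {"x": "Int"}
print(type_of(("add", ("var", "x"), ("int", 5)), env))  # Int
```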

Subfields: type theory, compiler design, programming languages

Formal methods[edit]

Main article: Formal methods

Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
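
As a loose illustration of the idea (formal methods prove such properties statically for all inputs; run-time assertions like these only check them for the inputs actually tried):

```python
def integer_sqrt(n):
    """Largest r with r*r <= n, with its specification checked at run time.
    A formal-methods tool would prove the invariant and postcondition hold
    for every input; asserting them per call is only an approximation."""
    assert n >= 0                            # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert r * r <= n                    # loop invariant: never overshoot
    assert r * r <= n < (r + 1) * (r + 1)    # postcondition
    return r

print(integer_sqrt(17))  # 4
```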

Applied computer science[edit]

Applied computer science aims at identifying certain computer science concepts that can be used directly in solving real world problems.

Artificial intelligence[edit]

This branch of computer science aims to synthesise goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning and communication that are found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence (AI) research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but its main field of practical application has been as an embedded component in areas of software development that require computational understanding. The starting point in the late 1940s was Alan Turing’s question “Can computers think?”, and the question remains effectively unanswered, although the “Turing test” is still used to assess computer output on the scale of human intelligence. The automation of evaluative and predictive tasks has, however, been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.

Subfields: machine learning, computer vision, image processing, pattern recognition, data mining, evolutionary computation, knowledge representation, natural language processing, robotics

Computer architecture and engineering[edit]

Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[42] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.

Subfields: digital logic, microarchitecture, multiprocessing, ubiquitous computing, systems architecture, operating systems

Computer performance analysis[edit]

Main article: Computer performance

Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[43]
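
One standard tool in this kind of analysis, not mentioned above but widely used, is Little’s law, which relates the average number of requests in a system (L) to the arrival rate (λ) and the mean response time (W):

```python
# Little's law: L = lambda * W. A standard result of queueing theory,
# used here with made-up illustrative numbers.

arrival_rate = 200.0        # requests per second (lambda)
mean_response_time = 0.05   # seconds per request (W)

in_flight = arrival_rate * mean_response_time
print(in_flight)  # 10.0 requests in the system on average
```

The law holds for any stable system regardless of arrival distribution, which is why it appears so often in capacity planning.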

Computer graphics and visualization[edit]

Computer graphics is the study of digital visual content and involves the synthesis and manipulation of image data. The field is connected to many other areas of computer science, including computer vision, image processing, and computational geometry, and is heavily applied in special effects and video games.

Computer security and cryptography[edit]

Main articles: Computer security and Cryptography

Computer security is a branch of computer technology whose objective includes the protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encryption) and recovering (decryption) information. Modern cryptography is largely related to computer science, since many encryption and decryption algorithms rest on the computational hardness of the underlying problems.
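
The encrypt/decrypt relationship can be illustrated with a toy XOR cipher (deliberately insecure with a short repeating key; only a truly random, never-reused key as long as the message – a one-time pad – makes this construction secure):

```python
def xor_bytes(data, key):
    """XOR each byte of data with the repeating key.
    Toy illustration that encryption and decryption are inverses:
    applying the same key twice restores the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"

ciphertext = xor_bytes(plaintext, key)
recovered = xor_bytes(ciphertext, key)   # XOR twice restores the original
print(recovered)  # b'attack at dawn'
```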

Computational science[edit]

Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.

Subfields: numerical analysis, computational physics, computational chemistry, bioinformatics

Computer networks[edit]

Main article: Computer network

This branch of computer science is concerned with the design, operation, and management of the networks that connect computers worldwide.

Concurrent, parallel and distributed systems[edit]

Concurrency is a property of systems in which several computations execute simultaneously and potentially interact with each other. A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi and the Parallel Random Access Machine model. A distributed system extends the idea of concurrency onto multiple computers connected through a network. Computers within the same distributed system have their own private memory and often exchange information to achieve a common goal.
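
A minimal sketch of the core concurrency problem and its most common remedy – serializing access to shared state with a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # serialize the read-modify-write on counter
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: without the lock, interleaved updates could be lost
```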

Databases[edit]

Main article: Database

A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
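
A minimal sketch of these ideas using SQLite, a relational database management system that ships with Python’s standard library (the table and rows here are made up for illustration):

```python
import sqlite3

# An in-memory relational database: create, store, and query through SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?)",
    [("On Computable Numbers", 1936),
     ("A Mathematical Theory of Communication", 1948)],
)

# The query language (SQL) retrieves data through the relational model.
rows = conn.execute("SELECT title FROM papers WHERE year < 1940").fetchall()
print(rows)  # [('On Computable Numbers',)]
conn.close()
```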

Software engineering[edit]

Main article: Software engineering

Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software – it does not just deal with the creation or manufacture of new software, but also its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers were projected to be among the fastest growing occupations from 2008 to 2018.

The great insights of computer science[edit]

The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:[44]

  • All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as “on/off”, “magnetized/de-magnetized”, “high-voltage/low-voltage”, etc.).
See also: Digital physics
  • Alan Turing‘s insight: there are only five actions that a computer has to perform in order to do “anything”.
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:

  • move left one location;
  • move right one location;
  • read symbol at current location;
  • print 0 at current location;
  • print 1 at current location.
See also: Turing machine
  • Corrado Böhm and Giuseppe Jacopini‘s insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do “anything”.
Only three rules are needed to combine any set of basic instructions into more complex ones:

  • sequence: first do this, then do that;
  • selection: IF such-and-such is the case, THEN do this, ELSE do that;
  • repetition: WHILE such-and-such is the case DO this.
Note that the three rules of Böhm and Jacopini’s insight can be further simplified with the use of goto (which means goto is more elementary than structured programming).
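
The three rules are enough to express familiar algorithms; for example, Euclid’s algorithm written using only sequence, selection, and repetition:

```python
def gcd(a, b):
    """Euclid's algorithm (subtraction form) built from only the three
    structured rules: sequence (one statement after another),
    selection (if/else), and repetition (while)."""
    while b != 0:                 # repetition
        if a >= b:                # selection
            a = a - b
        else:
            a, b = b, a           # swap so a stays the larger
    return a                      # sequence ends here

print(gcd(48, 18))  # 6
```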

Academia[edit]

Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications.[45][46] One proposed explanation for this is the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.[47]

Education[edit]

Since computer science is a relatively new field, it is not as widely taught in schools and universities as other academic subjects. For example, in 2014, Code.org estimated that only 10 percent of high schools in the United States offered computer science education.[48] A 2010 report by the Association for Computing Machinery (ACM) and the Computer Science Teachers Association (CSTA) revealed that only 14 out of 50 states had adopted significant education standards for high school computer science.[49] However, computer science education is growing. Some countries, such as Israel, New Zealand and South Korea, have already included computer science in their national secondary education curricula.[50][51] Several countries are following suit.[52]

In most countries, there is a significant gender gap in computer science education. For example, in the U.S. about 20% of computer science degrees in 2012 were conferred to women.[53] This gender gap also exists in other Western countries.[54] However, in some parts of the world, the gap is small or nonexistent. In 2011, approximately half of all computer science degrees in Malaysia were conferred to women.[55] In 2001, women made up 54.5% of computer science graduates in Guyana.[54]

Computer science

Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations.

Computer science is the scientific and practical approach to computation and its applications. It is the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to information. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems.[1]

Its subfields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory (which explores the fundamental properties of computational and intractable problems), are highly abstract, while fields such as computer graphics emphasize real-world visual applications. Still other fields focus on the challenges in implementing computation. For example, programming language theory considers various approaches to the description of computation, while the study of computer programming investigates various aspects of the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans.
