
The Centripetal City: Telecommunications, the Internet, and the Shaping of the Modern Urban Environment




Definition of a Network

The word network is generally used to mean a set of computers connected together in such a way that they can communicate and share information. The word can have more specific meanings depending on the context in which it is used. The Internet is a network, and so is the LAN at your office or school. Speaking in the most general terms, the networks at your school, at work, and the Internet are all networks, but they are not necessarily part of the same network. What is and is not part of a network is often defined by who owns and operates the equipment and the computers that belong to it. Thus, your school's network is separate from the Internet.

You know you have a network when you have two or more computers connected together and they are able to communicate. Plugged into the back of each computer (end station) is some sort of communications port. Most desktop computers today have serial ports, parallel ports, Ethernet ports, modem ports, FireWire ports, USB ports and more. All of these ports have been used in one way or another to connect computers to a network.
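
A quick way to see this in practice is to test whether one machine can open a connection to another. The short Python sketch below is only an illustration (it is not part of the original text); the address and port are hypothetical placeholders for a host on your own network.

import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: test reachability of a hypothetical server on the LAN
# (192.0.2.10 is a documentation-only address; substitute a real one).
print(can_reach("192.0.2.10", 80))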

Xerox was the first company to start research and development on networks. Its printers were expensive, and users could only print from the one large computer (a mainframe) attached directly to the printer. Xerox decided it could sell more printers if anyone could use a printer from any computer over a communications link, so it put Bob Metcalfe and others to work on researching and designing what eventually came to be called Ethernet.

Hosts, End Stations and Workstations

When people talk about networks, they often refer to computers at the edge of the network as hosts, end stations or workstations. These terms all mean the same thing: a computer attached to the network. The word HOST has the most general meaning and can include anything attached to the network, including hubs, bridges, switches, routers, access points, firewalls, workstations, servers, mainframes, printers, scanners, copiers, fax machines and more!

Just about everything electronic that has a processor and which you would use in an office is 'network capable' today and lots of things that aren't currently networked probably will be networked in the future. Yes, in many offices the phone system already IS the network (Voice over IP).

LAN, MAN, WAN and er.. IPAN??

There are some terms used to describe the size and scope of a network: LAN, MAN, WAN. We've added our own term, 'IPAN'.

A Local Area Network (LAN) is usually a single set of connected computers that are in a single small location such as a room, a floor of a building, or the whole building.

A Metropolitan Area Network (MAN) is a network that encompasses a city or town. It is usually built from multiple point-to-point fiber-optic connections assembled by a communications company and leased to its customers, but a small number of large corporations have built MANs of their own and opened them to the local companies with which they do business. The automotive, travel and insurance industries are just a few examples of industries that have built a MAN.

A Wide Area Network (WAN) is usually composed of the links that connect geographically separated sites, such as the campuses of a university or the offices of a corporation. WAN connections can span many miles, so you frequently hear people referring to the 'WAN' connection to an office halfway around the world. Usually, what distinguishes a WAN from a LAN is that one or more links span a large distance over serial, T-carrier, ISDN, Frame Relay or ATM links.

So what the heck is an IPAN? An Inter-Planetary Area Network. The rovers Spirit and Opportunity on Mars have IP addresses on a NASA network, and NASA uses Internet protocols to communicate with them (probably UDP). While the communication with the Spirit rover doesn't actually travel over the Internet, the NASA network does have hosts spanning Earth and Mars.

 Physical Network Topologies

The hardware used to build the network will usually require that the structure of the network conform to a certain design. The word topology is used to describe what the network looks like when drawn on paper and, to a large extent, how it operates.

Bus Topology

A bus topology connects all computers using a single wire, usually a piece of coaxial cable, whose copper core carries the signals that all devices transmit and receive. All devices hear all communication over the bus.

Ring Topology

A ring topology usually involves connecting two or more computers using paired physical interfaces. One interface is the clockwise side of the ring; the other is the counter-clockwise side. Devices connected to the ring can transmit and receive, but there is usually some other method for controlling access to the shared network hardware. Token Ring uses a ring topology, as do CDDI and FDDI. All three of these network technologies use a token-passing scheme in which the computer holding the token is allowed to transmit.

Star Topology

A star topology is the most common network topology in use today. All devices in the network are connected to a single hub or repeater. The connected devices radiate outward from the hub like an asterisk '*' or star.

Hub and Spoke Topology

Hub and spoke is another term often used to describe a star topology.

Point to Point Topology (Daisy Chaining)

A point-to-point topology is most often a communications connection between two devices over a single hardware connection that is not shared by any other devices. There will be exactly two and only two devices on the connection. Networks using point-to-point topologies can be daisy-chained together to form an end-to-end communications path.

Point to Multipoint

In a point-to-multipoint topology, a single connection point on the network has network segments that run to several other points.
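
To make the idea of topology concrete, here is a small Python sketch (not from the original text) that represents a star and a ring as adjacency lists, so the 'shape' of each network can be read directly from the data structure. The node names are hypothetical.

def star(hub, spokes):
    # Every spoke connects only to the central hub.
    topology = {hub: list(spokes)}
    for s in spokes:
        topology[s] = [hub]
    return topology

def ring(nodes):
    # Each node connects to a clockwise and a counter-clockwise neighbour.
    n = len(nodes)
    return {nodes[i]: [nodes[(i - 1) % n], nodes[(i + 1) % n]] for i in range(n)}

print(star("hub", ["pc1", "pc2", "pc3"]))
print(ring(["pc1", "pc2", "pc3", "pc4"]))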

Logical Network Topologies

Peer-to-Peer

A peer-to-peer network is composed of two or more self-sufficient computers. Each computer handles all functions, logging in, storage, providing a user interface etc. The computers on a peer-to-peer network can communicate, but do not need the resources or services available from the other computers on the network. Peer-to-peer is the opposite of the client-server logical network model.

A Microsoft Windows Workgroup is one example of a peer-to-peer network. UNIX servers running as stand-alone systems also form a peer-to-peer network. Logins, services and files are local to each computer. You can only access resources on other peer computers if you have logins on those computers.

Client - Server

The simplest client-server network is composed of a server and one or more clients. The server provides a service that the client computer needs. Clients connect to the server across the network in order to access the service. A server can be a piece of software running on a computer, or it can be the computer itself.

One of the simplest examples of client-server is a File Transfer Protocol (FTP) session. FTP is a protocol and service that allows your computer to get files from, or put files onto, a second computer over a network connection. A computer running FTP client software opens a session to an FTP server to download or upload a file. The FTP server provides file storage services over the network; because it does so, it is said to be a 'file server'. A client software application is required to access the FTP service running on the file server.
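
The exchange looks roughly like the following sketch, which uses Python's standard ftplib module as the client software. The server name, credentials and file name are hypothetical placeholders, not values from the text above.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:           # open a session to the FTP server
    ftp.login("user", "secret")               # authenticate with the file server
    with open("report.txt", "wb") as f:
        # ask the server to send the file; each received block is written to disk
        ftp.retrbinary("RETR report.txt", f.write)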

Most computer networks today control logins on all machines from a centralized logon server. When you sit down at a computer and type in your username and password, they are sent by the computer to the logon server. UNIX servers use NIS, NIS+ or LDAP to provide these login services. Microsoft Windows computers use Active Directory and Windows Logon and/or an LDAP client.

Users on a client-server network will usually only need one login to access resources on the network.

Distributed Services

Computer networks using distributed services provide those services to client computers, but not from a centralized server. The services are running on more than one computer and some or all of the functions provided by the service are provided by more than one server.

The simplest example of a distributed service is the Domain Name System (DNS), which performs the function of turning human-readable names into numbers called IP addresses. Whenever you browse a web page, your computer uses DNS. Your computer sends a DNS request to your local DNS server. That local server then goes to a remote server on the Internet called a DNS root server to begin the lookup process. The root server directs your local DNS server toward the servers responsible for the domain the website belongs to. Thus, at least three DNS servers are involved in finding and providing the IP address of the website you intended to browse: your local DNS server performs the queries and asks other servers for information, the root DNS server tells your local DNS server where to find an answer, and the DNS server that 'owns' the website's domain returns the correct IP address. Your computer stores that address in its own local DNS cache. DNS is therefore a distributed service that runs everywhere, but no one computer can do the job by itself.
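
From the client's point of view, all of this distributed work is hidden behind a single library call. The sketch below (an illustration, not part of the original text) uses Python's standard socket module; the stub resolver hands the query to your local DNS server, which performs the recursive lookup against the root and authoritative servers. The domain name is a placeholder.

import socket

# Resolve a name to one or more IP addresses via the local DNS server.
addresses = socket.getaddrinfo("www.example.com", 80, proto=socket.IPPROTO_TCP)
for family, socktype, proto, canonname, sockaddr in addresses:
    print(sockaddr[0])    # the IP address returned by the distributed DNS lookup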

 Communication Methods

1. Point-to-point

2. Broadcast, multiple access

3. Broadcast, non-multiple access

4. Point-to-multipoint

Network Technologies

1. Repeater (Hub based)

2. Bridging (Bridge based)

3. Switching (Switch based)

4. Routing (multiple networks)

 

 http://www.inetdaemon.com/tutorials/computers/hardware/cpu/

                             

                                   Computer programming

Computer programming is a field that has to do with the analytical creation of source code that can be used to configure computer systems. Computer programmers may work across a broad range of programming tasks, or specialize in some aspect of development, support, or maintenance of computers for the home or workplace. Programmers provide the basis for the creation and ongoing function of the systems that many people rely upon for all sorts of information exchange, both business related and for entertainment purposes.

The computer programmer often focuses on the development of software that allows people to perform a broad range of functions. All online functions that are utilized in the home and office owe their origins to a programmer or group of programmers. Computer operating systems, office suites, word processing programs, and even Internet dialing software all exist because of the work of programmers.

Computer programming goes beyond software development. The profession also extends to the adaptation of software for internal use, and the insertion of code that allows a program to be modified for a function that is unique to a given environment. When this is the case, the computer programmer may be employed with a company that wishes to use existing software as the foundation for a customized platform that will be utilized as part of the company intranet.

A third aspect of computer programming is the ongoing maintenance of software that is currently running as part of a network. Here, the programmer may work hand in hand with other information technology specialists to identify issues with current programs, and take steps to adapt or rewrite sections of code in order to correct a problem or enhance a function in some manner.

In short, computer programming is all about developing, adapting, and maintaining the programs that many of us rely upon for both work and play. Programmers are constantly in demand for all three of these functions, since businesses and individuals are always looking for new and better ways to use computer technology for all sorts of tasks. With this in mind, computer programming is a very stable profession to enter, and it offers many different employment opportunities.

 

http://www.wisegeek.com/what-is-computer-programming.htm

                                      Computer programming

Computer programming (often shortened to programming or coding) is the process of designing, writing, testing, debugging / troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The code may be a modification of an existing source or something completely new. The purpose of programming is to create a program that exhibits a certain desired behaviour (customization). The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.

Overview

Within software engineering, programming (the implementation) is regarded as one phase in a software development process.

There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline. In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." However, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world.[citation needed] However, because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined (e.g. United States Air Force use of AdaCore and security clearance).

Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir-Whorf hypothesis in linguistics, that postulates that a particular language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.

Said another way, programming is the craft of transforming requirements into something that a computer can execute.

History of programming

Wired plug board for an IBM 402 Accounting Machine.

The concept of devices that operate following a pre-defined set of instructions traces back to Greek Mythology, notably Hephaestus, the Greek Blacksmith God, and his mechanical slaves. The Antikythera mechanism from ancient Greece was a calculator utilizing gears of various sizes and configuration to determine its operation. Al-Jazari built programmable Automata in 1206. One system employed in these devices was the use of pegs and cams placed into a wooden drum at specific locations, which would sequentially trigger levers that in turn operated percussion instruments. The output of this device was a small drummer playing various rhythms and drum patterns. The Jacquard Loom, which Joseph Marie Jacquard developed in 1801, used a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards. Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming. Development of computer programming accelerated through the Industrial Revolution.

In the late 1880s, Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..." To process these punched cards, first known as "Hollerith cards", he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM). The addition of a control panel (plugboard) to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s, there were a variety of plug-board programmable machines, called unit record equipment, to perform data-processing tasks (card reading). Early computer programmers used plug-boards for the variety of complex calculations requested of the newly invented machines.

 

Data and instructions could be stored on external punched cards, which were kept in order and arranged in program decks.

The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions (elementary operations) of the particular machine, often in binary notation. Every model of computer would likely use different instructions (machine language) to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g., ADD X, TOTAL). Entering a program in assembly language is usually more convenient, faster, and less prone to human error than using machine language, but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.

In 1954, FORTRAN was invented; it was the first high-level programming language to have a functional implementation, as opposed to just a design on paper. (A high-level language is, in very general terms, any programming language that allows the programmer to write programs in terms that are more abstract than assembly language instructions, i.e. at a level of abstraction "higher" than that of an assembly language.) It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, is converted into machine instructions using a special program called a compiler, which translates the FORTRAN program into machine language. In fact, the name FORTRAN stands for "Formula Translation". Many other languages were developed, including some for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape. (See computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. (Usually, an error in punching a card meant that the card had to be discarded and a new one punched to replace it.)

As time has progressed, computers have made giant leaps in the area of processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these high-level languages usually incur greater overhead, the increase in speed of modern computers has made the use of these languages much more practical than in the past. These increasingly abstracted languages typically are easier to learn and allow the programmer to develop applications much more efficiently and with less source code. However, high-level languages are still impractical for a few programs, such as those where low-level hardware control is necessary or where maximum processing speed is vital.

Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities.

Modern programming

Quality requirements

Whatever the approach to software development may be, the final program must satisfy some fundamental properties. The following properties are among the most relevant:

• Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and, to some extent, even user interaction): the less, the better. This also includes the correct disposal of resources, such as cleaning up temporary files, and the absence of memory leaks.

• Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms, and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).

• Robustness: how well a program anticipates problems not due to programmer error. This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services and network connections, and user error.

• Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.

• Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behaviour of the hardware and operating system, and availability of platform specific compilers (and sometimes libraries) for the language of the source code.

• Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.

Algorithmic complexity

The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation, O(n), which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
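
As a brief illustration (not from the original text), the Python sketch below contrasts two familiar complexity classes: a linear search may inspect every element, O(n), while a binary search over sorted data halves the remaining range at each step, O(log n).

import bisect

def linear_search(items, target):
    for i, value in enumerate(items):       # may inspect every element: O(n)
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    i = bisect.bisect_left(sorted_items, target)    # halves the range each step: O(log n)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
print(linear_search(data, 765432), binary_search(data, 765432))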

Methodologies

The first step in most formal software development projects is requirements analysis, followed by modeling, implementation, and failure elimination (debugging). Many differing approaches exist for each of these tasks. One approach popular for requirements analysis is Use Case analysis.

Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both OOAD and MDA.

A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).

Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.

Measuring language usage

It is very difficult to determine which modern programming languages are most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; scripting languages in web development; and C in embedded applications), while some languages are regularly used to write many different kinds of applications.

Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books teaching the language that are sold (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).

Debugging

The first actual computer "bug": a moth found trapped in a relay of the Harvard Mark II and "debugged" in 1947.

Debugging is a very important task in the software development process, because an incorrect program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static analysis tool can help detect some possible problems.

Debugging is often done with IDEs like Visual Studio, NetBeans, and Eclipse. Standalone debuggers like gdb are also used, and these often provide less of a visual environment, usually using a command line.
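
For interpreted languages the workflow is similar; the sketch below shows Python's standard pdb debugger on a small, hypothetical function that contains a deliberate off-by-one bug. It is only an illustration of the debugging step, not an example from the text above.

def average(values):
    total = 0
    for v in values:
        total += v
    breakpoint()                          # drops into the pdb debugger here (Python 3.7+)
    return total / (len(values) - 1)      # bug: should divide by len(values)

if __name__ == "__main__":
    print(average([2, 4, 6]))

# Alternatively, run an entire script under the debugger from the command line:
#   python -m pdb this_script.py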

Programming languages

Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute.

Allen Downey, in his book How To Think Like A Computer Scientist, writes:

The details look different in different languages, but a few basic instructions appear in just about every language:

• input: Get data from the keyboard, a file, or some other device.

• output: Display data on the screen or send data to a file or other device.

• arithmetic: Perform basic arithmetical operations like addition and multiplication.

• conditional execution: Check for certain conditions and execute the appropriate sequence of statements.

• repetition: Perform some action repeatedly, usually with some variation.
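
As a short sketch (not taken from Downey's book), the following Python program uses all five of these basic instructions in a few lines.

values = []
while True:
    line = input("Enter a number (blank to stop): ")   # input
    if line == "":                                     # conditional execution
        break
    values.append(float(line))

total = 0.0
for v in values:                                       # repetition
    total += v                                         # arithmetic

print("Sum:", total)                                   # output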

Many computer languages provide a mechanism to call functions provided by libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., the method of passing arguments), these functions may be written in any other language.
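
For example, a Python program can call a function that was written in C, provided the calling convention is declared. The sketch below uses Python's standard ctypes module; the shared-library name is platform-specific (it assumes a typical Linux system), so treat it as an illustrative assumption rather than a portable recipe.

from ctypes import CDLL, c_double

libm = CDLL("libm.so.6")             # the C math library on many Linux systems
libm.cos.argtypes = [c_double]       # declare the run-time (calling) convention
libm.cos.restype = c_double

print(libm.cos(0.0))                 # calls the C implementation; prints 1.0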

Programmers

Computer programmers are those who write computer software. Their jobs usually involve:

• Coding

• Compilation

• Documentation

• Integration

• Maintenance

• Requirements analysis

• Software architecture

• Software testing

• Specification

• Debugging

 http://en.wikipedia.org/wiki/Computer_programming

 

 

                                                  Technology

By the mid 20th century, humans had achieved a mastery of technology sufficient to leave the atmosphere of the Earth for the first time and explore space.

Technology is the usage and knowledge of tools, techniques, crafts, systems or methods of organization. The word technology comes from the Greek technología (τεχνολογία) — téchnē (τέχνη), an 'art', 'skill' or 'craft' and -logía (-λογία), the study of something, or the branch of knowledge of a discipline. The term can either be applied generally or to specific areas: examples include construction technology, medical technology, or state-of-the-art technology or high technology. Technologies can also be exemplified in a material product, for example an object can be termed state of the art.

Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments. The human species' use of technology began with the conversion of natural resources into simple tools. The prehistorical discovery of the ability to control fire increased the available sources of food and the invention of the wheel helped humans in travelling in and controlling their environment. Recent technological developments, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.

Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the Earth and its environment. Various implementations of technology influence the values of a society and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.

Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticise the pervasiveness of technology in the modern world, opining that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently, it was believed that the development of technology was restricted only to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge to other generations.

Definition and usage

The invention of the printing press made it possible for scientists and politicians to communicate their ideas with ease, leading to the Age of Enlightenment; an example of technology as a cultural force.

The use of the term technology has changed significantly over the last 200 years. Before the 20th century, the term was uncommon in English, and usually referred to the description or study of the useful arts. The term was often connected to technical education, as in the Massachusetts Institute of Technology (chartered in 1861). "Technology" rose to prominence in the 20th century in connection with the second industrial revolution. The meanings of technology changed in the early 20th century when American social scientists, beginning with Thorstein Veblen, translated ideas from the German concept of Technik into "technology." In German and other European languages, a distinction exists between Technik and Technologie that is absent in English, as both terms are usually translated as "technology." By the 1930s, "technology" referred not to the study of the industrial arts, but to the industrial arts themselves. In 1937, the American sociologist Read Bain wrote that "technology includes all tools, machines, utensils, weapons, instruments, housing, clothing, communicating and transporting devices and the skills by which we produce and use them." Bain's definition remains common among scholars today, especially social scientists. But equally prominent is the definition of technology as applied science, especially among scientists and engineers, although most social scientists who study technology reject this definition. More recently, scholars have borrowed from European philosophers of "technique" to extend the meaning of technology to various forms of instrumental reason, as in Foucault's work on technologies of the self ("techniques de soi").

Dictionaries and scholars have offered a variety of definitions. The Merriam-Webster dictionary offers a definition of the term: "the practical application of knowledge especially in a particular area" and "a capability given by the practical application of knowledge". Ursula Franklin, in her 1989 "Real World of Technology" lecture, gave another definition of the concept; it is "practice, the way we do things around here". The term is often used to imply a specific field of technology, or to refer to high technology or just consumer electronics, rather than technology as a whole. Bernard Stiegler, in Technics and Time, 1, defines technology in two ways: as "the pursuit of life by means other than life", and as "organized inorganic matter."

Technology can be most broadly defined as the entities, both material and immaterial, created by the application of mental and physical effort in order to achieve some value. In this usage, technology refers to tools and machines that may be used to solve real-world problems. It is a far-reaching term that may include simple tools, such as a crowbar or wooden spoon, or more complex machines, such as a space station or particle accelerator. Tools and machines need not be material; virtual technology, such as computer software and business methods, fall under this definition of technology.

The word "technology" can also be used to refer to a collection of techniques. In this context, it is the current state of humanity's knowledge of how to combine resources to produce desired products, to solve problems, fulfill needs, or satisfy wants; it includes technical methods, skills, processes, techniques, tools and raw materials. When combined with another term, such as "medical technology" or "space technology", it refers to the state of the respective field's knowledge and tools. "State-of-the-art technology" refers to the high technology available to humanity in any field.

Technology can be viewed as an activity that forms or changes culture. Additionally, technology is the application of math, science, and the arts for the benefit of life as it is known. A modern example is the rise of communication technology, which has lessened barriers to human interaction and, as a result, has helped spawn new subcultures; the rise of cyberculture has, at its basis, the development of the Internet and the computer. Not all technology enhances culture in a creative way; technology can also help facilitate political oppression and war via tools such as guns. As a cultural activity, technology predates both science and engineering, each of which formalize some aspects of technological endeavor.

 

Science, engineering and technology

The distinction between science, engineering and technology is not always clear. Science is the reasoned investigation or study of phenomena, aimed at discovering enduring principles among elements of the phenomenal world by employing formal techniques such as the scientific method. Technologies are not usually exclusively products of science, because they have to satisfy requirements such as utility, usability and safety.

Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for practical human means, often (but not always) using results and techniques from science. The development of technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic, and historical knowledge, to achieve some practical result.

Technology is often a consequence of science and engineering — although technology as a human activity precedes the two fields. For example, science might study the flow of electrons in electrical conductors, by using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools and machines, such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists; the three fields are often considered as one for the purposes of research and reference.

The exact relations between science and technology in particular have been debated by scientists, historians, and policymakers in the late 20th century, in part because the debate can inform the funding of basic and applied science. In the immediate wake of World War II, for example, in the United States it was widely considered that technology was simply "applied science" and that to fund basic science was to reap technological results in due time. An articulation of this philosophy could be found explicitly in Vannevar Bush's treatise on postwar science policy, Science—The Endless Frontier: "New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature... This essential new knowledge can be obtained only through basic scientific research." In the late-1960s, however, this view came under direct attack, leading towards initiatives to fund science for specific tasks (initiatives resisted by the scientific community). The issue remains contentious—though most analysts resist the model that technology simply is a result of scientific research.

History

Paleolithic (2.5 million – 10,000 BC)

The use of tools by early humans was partly a process of discovery, partly of evolution. Early humans evolved from a race of foraging hominids which were already bipedal, with a brain mass approximately one third that of modern humans. Tool use remained relatively unchanged for most of early human history, but approximately 50,000 years ago, a complex set of behaviors and tool use emerged, believed by many archaeologists to be connected to the emergence of fully modern language.

Human ancestors have been using stone and other tools since long before the emergence of Homo sapiens approximately 200,000 years ago. The earliest methods of stone tool making, known as the Oldowan "industry", date back to at least 2.3 million years ago, with the earliest direct evidence of tool usage found in Ethiopia within the Great Rift Valley, dating back to 2.5 million years ago. This era of stone tool use is called the Paleolithic, or "Old stone age", and spans all of human history up to the development of agriculture approximately 12,000 years ago.

To make a stone tool, a "core" of hard stone with specific flaking properties (such as flint) was struck with a hammerstone. This flaking produced a sharp edge on the core stone as well as on the flakes, either of which could be used as tools, primarily in the form of choppers or scrapers. These tools greatly aided the early humans in their hunter-gatherer lifestyle to perform a variety of tasks including butchering carcasses (and breaking bones to get at the marrow); chopping wood; cracking open nuts; skinning an animal for its hide; and even forming other tools out of softer materials such as bone and wood.

The earliest stone tools were crude, being little more than a fractured rock. In the Acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. The Middle Paleolithic, approximately 300,000 years ago, saw the introduction of the prepared-core technique, in which multiple blades could be rapidly formed from a single core stone. The Upper Paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, in which a wood, bone, or antler punch could be used to shape a stone very finely.

Fire

The discovery and utilization of fire, a simple energy source with many profound uses, was a turning point in the technological evolution of humankind. The exact date of its discovery is not known; evidence of burnt animal bones at the Cradle of Humankind suggests that the domestication of fire occurred before 1,000,000 BC; scholarly consensus indicates that Homo erectus had controlled fire by between 500,000 BC and 400,000 BC. Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten.

Clothing and shelter

Other technological advances made during the Paleolithic era were clothing and shelter; the adoption of both technologies cannot be dated exactly, but they were a key to humanity's progress. As the Paleolithic era progressed, dwellings became more sophisticated and more elaborate; as early as 380,000 BC, humans were constructing temporary wood huts. Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa by 200,000 BC and into other continents, such as Eurasia.

Neolithic through Classical Antiquity (10,000 BC – 300 AD)

An array of Neolithic artifacts, including bracelets, axe heads, chisels, and polishing tools.

Man's technological ascent began in earnest in what is known as the Neolithic period ("New stone age"). The invention of polished stone axes was a major advance because it allowed forest clearance on a large scale to create farms. The discovery of agriculture allowed for the feeding of larger populations, and the transition to a sedentary lifestyle increased the number of children that could be simultaneously raised, as young children no longer needed to be carried, as was the case with the nomadic lifestyle. Additionally, children could contribute labor to the raising of crops more readily than they could to the hunter-gatherer lifestyle.

With this increase in population and availability of labor came an increase in labor specialization. What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures, the specialization of labor, trade and war amongst adjacent cultures, and the need for collective action to overcome environmental challenges, such as the building of dikes and reservoirs, are all thought to have played a role.

Metal tools

Continuing improvements led to the furnace and bellows and provided the ability to smelt and forge native metals (naturally occurring in relatively pure form). Gold, copper, silver, and lead were such early metals. The advantages of copper tools over stone, bone, and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 8000 BC). Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4000 BC). The first use of iron alloys such as steel dates to around 1400 BC.

Energy and Transport

Meanwhile, humans were learning to harness other forms of energy. The earliest known use of wind power is the sailboat. The earliest record of a ship under sail is shown on an Egyptian pot dating back to 3200 BC. From prehistoric times, Egyptians probably used the power of the Nile's annual floods to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and 'catch' basins. Similarly, the early peoples of Mesopotamia, the Sumerians, learned to use the Tigris and Euphrates rivers for much the same purposes. But more extensive use of wind and water (and even human) power required another invention.

According to archaeologists, the wheel was invented around 4000 B.C., probably independently in Mesopotamia (in present-day Iraq) and in other regions. Estimates of when this may have occurred range from 5500 to 3000 B.C., with most experts putting it closer to 4000 B.C. The oldest artifacts with drawings that depict wheeled carts date from about 3000 B.C.; however, the wheel may have been in use for millennia before these drawings were made. There is also evidence from the same period that wheels were used for the production of pottery. (Note that the original potter's wheel was probably not a wheel, but rather an irregularly shaped slab of flat wood with a small hollowed or pierced area near the center, mounted on a peg driven into the earth. It would have been rotated by repeated tugs by the potter or his assistant.) More recently, the oldest known wooden wheel in the world was found in the Ljubljana marshes of Slovenia.

The invention of the wheel revolutionized activities as disparate as transportation, war, and the production of pottery (for which it may have been first used). It didn't take long to discover that wheeled wagons could be used to carry heavy loads and fast (rotary) potters' wheels enabled early mass production of pottery. But it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources.

Medieval and Modern history (300 AD —)

Innovation continued through the Middle Ages with the introduction of silk, the horse collar and horseshoes in the first few hundred years after the fall of the Roman Empire. Medieval technology saw the use of simple machines (such as the lever, the screw, and the pulley) being combined to form more complicated tools, such as the wheelbarrow, windmills and clocks. The Renaissance brought forth many of these innovations, including the printing press (which facilitated the greater communication of knowledge), and technology became increasingly associated with science, beginning a cycle of mutual advancement. The advancements in technology in this era allowed a more steady supply of food, followed by the wider availability of consumer goods.

Starting in the United Kingdom in the 18th century, the Industrial Revolution was a period of great technological discovery, particularly in the areas of agriculture, manufacturing, mining, metallurgy and transport, driven by the discovery of steam power. Technology took another step with the harnessing of electricity to create such innovations as the electric motor, the light bulb and countless others. Scientific advancement and the discovery of new concepts later allowed for powered flight and advancements in medicine, chemistry, physics and engineering. The rise in technology has led to the construction of skyscrapers and large cities whose inhabitants rely on automobiles or other powered transit for transportation. Communication was also improved with the invention of the telegraph, telephone, radio and television.

The second half of the 20th century brought a host of new innovations. In physics, the discovery of nuclear fission led to both nuclear weapons and nuclear energy. Computers were invented and later miniaturized using transistors and integrated circuits, and the creation of the Internet followed. Humans have also been able to explore space with satellites (later used for telecommunication) and in manned missions going all the way to the Moon. In medicine, this era brought innovations such as open-heart surgery and later stem-cell therapy, along with new medications and treatments. Complex manufacturing and construction techniques and organizations are needed to construct and maintain these new technologies, and entire industries have arisen to support and develop succeeding generations of increasingly complex tools. Modern technology increasingly relies on training and education: designers, builders, maintainers and users often require sophisticated general and specific training. Moreover, these technologies have become so complex that entire fields have been created to support them, including engineering, medicine, and computer science, and other fields have been made more complex, such as construction, transportation and architecture.

Technicism

Generally, technicism is an over reliance or overconfidence in technology as a benefactor of society. Taken to extreme, technicism is the belief that humanity will ultimately be able to control the entirety of existence using technology. In other words, human beings will someday be able to master all problems and possibly even control the future using technology. Some, such as Stephen V. Monsma, connect these ideas to the abdication of religion as a higher moral authority.

Optimism

Optimistic assumptions are made by proponents of ideologies such as transhumanism and singularitarianism, which view technological development as generally having beneficial effects for the society and the human condition. In these ideologies, technological development is morally good. Some critics see these ideologies as examples of scientism and techno-utopianism and fear the notion of human enhancement and technological singularity which they support. Some have described Karl Marx as a techno-optimist.

Pessimism

On the somewhat pessimistic side are certain philosophers like Herbert Marcuse and John Zerzan, who believe that technological societies are inherently flawed a priori. They suggest that the result of such a society is to become evermore technological at the cost of freedom and psychological health.

Many, such as the Luddites and the prominent philosopher Martin Heidegger, hold serious reservations about technology, although they do not regard it as inherently flawed. Heidegger presents such a view in "The Question Concerning Technology": "Thus we shall never experience our relationship to the essence of technology so long as we merely conceive and push forward the technological, put up with it, or evade it. Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it."

Some of the most poignant criticisms of technology are found in what are now considered to be dystopian literary classics, for example Aldous Huxley's Brave New World and other writings, Anthony Burgess's A Clockwork Orange, and George Orwell's Nineteen Eighty-Four. In Goethe's Faust, Faust's selling his soul to the devil in return for power over the physical world is also often interpreted as a metaphor for the adoption of industrial technology.

An overtly anti-technological treatise is Industrial Society and Its Future, written by Theodore Kaczynski (aka The Unabomber) and printed in several major newspapers (and later books) as part of an effort to end his bombing campaign of the techno-industrial infrastructure.

Appropriate technology

The notion of appropriate technology, however, was developed in the 20th century (e.g., see the work of Jacques Ellul) to describe situations where it was not desirable to use very new technologies or those that required access to some centralized infrastructure or parts or skills imported from elsewhere. The eco-village movement emerged in part due to this concern.

Other animal species

This adult gorilla uses a branch as a walking stick to gauge the water's depth; an example of technology usage by primates.

The use of basic technology is also a feature of other animal species apart from humans. These include primates such as chimpanzees, some dolphin communities, and crows. Considering a more generic perspective of technology as ethology of active environmental conditioning and control, we can also refer to animal examples such as beavers and their dams, or bees and their honeycombs.

The ability to make and use tools was once considered a defining characteristic of the genus Homo. However, the discovery of tool construction among chimpanzees and related primates has discarded the notion of the use of technology as unique to humans. For example, researchers have observed wild chimpanzees utilising tools for foraging: some of the tools used include leaf sponges, termite fishing probes, pestles and levers. West African chimpanzees also use stone hammers and anvils for cracking nuts, as do capuchin monkeys of Boa Vista, Brazil.

Future technology

Theories of technology often attempt to predict the future of technology based on the high technology and science of the time. This process is difficult if not impossible. Referring to the sheer velocity of technological innovation, Arthur C. Clarke said "Any sufficiently advanced technology is indistinguishable from magic."

http://en.wikipedia.org/wiki/Technology

 

 

Modern technology is changing the way our brains work, says neuroscientist

By SUSAN GREENFIELD

Human identity, the idea that defines each and every one of us, could be facing an unprecedented crisis.

It is a crisis that would threaten long-held notions of who we are, what we do and how we behave.

It goes right to the heart - or the head - of us all. This crisis could reshape how we interact with each other, alter what makes us happy, and modify our capacity for reaching our full potential as individuals.

And it's caused by one simple fact: the human brain, that most sensitive of organs, is under threat from the modern world.

Unless we wake up to the damage that the gadget-filled, pharmaceutically-enhanced 21st century is doing to our brains, we could be sleepwalking towards a future in which neuro-chip technology blurs the line between living and non-living machines, and between our bodies and the outside world.

It would be a world where such devices could enhance our muscle power, or our senses, beyond the norm, and where we all take a daily cocktail of drugs to control our moods and performance.

Already, an electronic chip is being developed that could allow a paralysed patient to move a robotic limb just by thinking about it. As for drug-manipulated moods, they're already with us - although so far only to a medically prescribed extent.

Increasing numbers of people already take Prozac for depression, Paxil as an antidote for shyness, and give Ritalin to children to improve their concentration. But what if there were still more pills to enhance or "correct" a range of other specific mental functions?

What would such aspirations to be "perfect" or "better" do to our notions of identity, and what would it do to those who could not get their hands on the pills? Would some finally have become more equal than others, as George Orwell always feared?

Of course, there are benefits from technical progress - but there are great dangers as well, and I believe that we are seeing some of those today.

I'm a neuroscientist and my day-to-day research at Oxford University strives for an ever greater understanding - and therefore maybe, one day, a cure - for Alzheimer's disease.

But one vital fact I have learnt is that the brain is not the unchanging organ that we might imagine. It not only goes on developing, changing and, in some tragic cases, eventually deteriorating with age, it is also substantially shaped by what we do to it and by the experience of daily life. When I say "shaped", I'm not talking figuratively or metaphorically; I'm talking literally. At a microcellular level, the infinitely complex network of nerve cells that make up the constituent parts of the brain actually change in response to certain experiences and stimuli.

The brain, in other words, is malleable - not just in early childhood but right up to early adulthood, and, in certain instances, beyond. The surrounding environment has a huge impact both on the way our brains develop and how that brain is transformed into a unique human mind.

Of course, there's nothing new about that: human brains have been changing, adapting and developing in response to outside stimuli for centuries.

What prompted me to write my book is that the pace of change in the outside environment and in the development of new technologies has increased dramatically. This will affect our brains over the next 100 years in ways we might never have imagined.

Our brains are under the influence of an ever- expanding world of new technology: multichannel television, video games, MP3 players, the internet, wireless networks, Bluetooth links - the list goes on and on.

But our modern brains are also having to adapt to other 21st century intrusions, some of which, such as prescribed drugs like Ritalin and Prozac, are supposed to be of benefit, and some of which, such as widely available illegal drugs like cannabis and heroin, are not.

Electronic devices and pharmaceutical drugs all have an impact on the micro-cellular structure and complex biochemistry of our brains. And that, in turn, affects our personality, our behaviour and our characteristics. In short, the modern world could well be altering our human identity.

Three hundred years ago, our notions of human identity were vastly simpler: we were defined by the family we were born into and our position within that family. Social advancement was nigh on impossible and the concept of "individuality" took a back seat.

That only arrived with the Industrial Revolution, which for the first time offered rewards for initiative, ingenuity and ambition. Suddenly, people had their own life stories - ones which could be shaped by their own thoughts and actions. For the first time, individuals had a real sense of self.

But with our brains now under such widespread attack from the modern world, there's a danger that that cherished sense of self could be diminished or even lost.

Anyone who doubts the malleability of the adult brain should consider a startling piece of research conducted at Harvard Medical School. There, a group of adult volunteers, none of whom could previously play the piano, were split into three groups.

The first group were taken into a room with a piano and given intensive piano practice for five days. The second group were taken into an identical room with an identical piano - but had nothing to do with the instrument at all.

And the third group were taken into an identical room with an identical piano and were then told that for the next five days they had to just imagine they were practising piano exercises.

The resultant brain scans were extraordinary. Not surprisingly, the brains of those who simply sat in the same room as the piano hadn't changed at all.

Equally unsurprising was the fact that those who had performed the piano exercises saw marked structural changes in the area of the brain associated with finger movement.

But what was truly astonishing was that the group who had merely imagined doing the piano exercises saw changes in brain structure that were almost as pronounced as those that had actually had lessons. "The power of imagination" is not a metaphor, it seems; it's real, and has a physical basis in your brain.

Alas, no neuroscientist can explain how the sort of changes that the Harvard experimenters reported at the micro-cellular level translate into changes in character, personality or behaviour. But we don't need to know that to realise that changes in brain structure and our higher thoughts and feelings are incontrovertibly linked.

What worries me is that if something as innocuous as imagining a piano lesson can bring about a visible physical change in brain structure, and therefore some presumably minor change in the way the aspiring player performs, what changes might long stints playing violent computer games bring about? That eternal teenage protest of 'it's only a game, Mum' certainly begins to ring alarmingly hollow.

Already, it's pretty clear that the screen-based, two-dimensional world that so many teenagers - and a growing number of adults - choose to inhabit is producing changes in behaviour. Attention spans are shorter, personal communication skills are reduced and there's a marked reduction in the ability to think abstractly.

This games-driven generation interpret the world through screen-shaped eyes. It's almost as if something hasn't really happened until it's been posted on Facebook, Bebo or YouTube.

Add that to the huge amount of personal information now stored on the internet - births, marriages, telephone numbers, credit ratings, holiday pictures - and it's sometimes difficult to know where the boundaries of our individuality actually lie. Only one thing is certain: those boundaries are weakening.

And they could weaken further still if, and when, neurochip technology becomes more widely available. These tiny devices will take advantage of the discovery that nerve cells and silicon chips can happily co-exist, allowing an interface between the electronic world and the human body. One of my colleagues recently suggested that someone could be fitted with a cochlear implant (a device that converts sound waves into electronic impulses and enables the deaf to hear) and a skull-mounted micro-chip that converts brain waves into words (a prototype is under research).

Then, if both devices were connected to a wireless network, we really would have arrived at the point which science fiction writers have been getting excited about for years. Mind reading!

He was joking, but for how long the gag remains funny is far from clear.

Today's technology is already producing a marked shift in the way we think and behave, particularly among the young.









