First Generation (1940-1956): The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, room-filling machines. They were very expensive to operate: in addition to consuming a great deal of electricity, they generated so much heat that malfunctions were common. First-generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could solve only one problem at a time.
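To make the idea of machine language concrete, here is a minimal, purely illustrative Python sketch of a toy instruction format. The 3-bit opcode, 5-bit operand layout and the mnemonics are invented for this example and do not correspond to any actual first-generation machine; the point is only that a machine-language program is a sequence of raw numeric words that the hardware decodes directly.

```python
# Illustrative only: a toy "machine language" decoder for a hypothetical
# 8-bit instruction format (3-bit opcode, 5-bit operand). Real first-generation
# instruction sets varied from machine to machine; this sketch just shows that
# machine language is numeric encodings interpreted directly by the hardware.

OPCODES = {0b000: "LOAD", 0b001: "ADD", 0b010: "STORE", 0b011: "JUMP"}

def decode(instruction: int) -> str:
    opcode = (instruction >> 5) & 0b111   # top 3 bits select the operation
    operand = instruction & 0b11111       # low 5 bits address a memory cell
    return f"{OPCODES.get(opcode, 'UNKNOWN')} {operand}"

# A short "program" written as raw numbers, the way early programmers worked:
program = [0b000_00100, 0b001_00101, 0b010_00110]  # LOAD 4; ADD 5; STORE 6
for word in program:
    print(f"{word:08b} -> {decode(word)}")
```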
Second Generation (1956-1963): The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. Transistors replaced vacuum tubes and ushered in the second generation of computers, allowing engineers to build machines that were smaller, cheaper, more reliable, and more energy-efficient. Second-generation computers still relied on punched cards for input and printouts for output, but they stored more information in magnetic-core memory and executed programs faster than their predecessors.
Third Generation (1964-1971): The integrated circuit was invented in 1958 but did not become widely used in computers until the mid-1960s. Integrated circuits packed far more circuitry onto a single chip, which led to more compact and powerful machines. Because these computers were smaller and cheaper than their predecessors, they became accessible to a mass audience for the first time.
Fourth Generation (1971-Present): The microprocessor was introduced in 1971 and remains the basis of computers today. Microprocessors made possible the development of the personal computer, or PC. With a microprocessor, the components of a computer, from the central processing unit (with its arithmetic/logic unit) and memory to the input/output controls, could be contained on a single chip, which led to further decreases in size and cost.
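As a rough illustration of how those components fit together, the toy Python model below bundles a control loop, an arithmetic/logic unit, memory, and a stand-in for output into one object, loosely mirroring the way a microprocessor integrates them on a single chip. The four-instruction design is invented for this sketch and is not a real instruction set.

```python
# Illustrative only: a toy model of the components named in the paragraph
# (CPU control logic, arithmetic/logic unit, memory, input/output), combined
# in one object the way a microprocessor combines them on one chip.

class ToyMicroprocessor:
    def __init__(self, program):
        self.memory = list(program) + [0] * (32 - len(program))  # unified memory
        self.acc = 0   # accumulator register
        self.pc = 0    # program counter

    def alu(self, op, value):
        """Arithmetic/logic unit: the only place data is transformed."""
        return self.acc + value if op == "ADD" else self.acc - value

    def run(self):
        # Control logic: fetch, decode, and execute until HALT.
        while True:
            opcode, operand = self.memory[self.pc]
            self.pc += 1
            if opcode == "HALT":
                return
            elif opcode == "LOAD":
                self.acc = self.memory[operand]
            elif opcode in ("ADD", "SUB"):
                self.acc = self.alu(opcode, self.memory[operand])
            elif opcode == "PRINT":          # stand-in for an I/O control
                print("output:", self.acc)

# Memory cells 10 and 11 hold data; the program adds them and prints the sum.
program = [("LOAD", 10), ("ADD", 11), ("PRINT", 0), ("HALT", 0)]
cpu = ToyMicroprocessor(program)
cpu.memory[10], cpu.memory[11] = 2, 3
cpu.run()   # prints "output: 5"
```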