CSci 555: Functional Programming
Spring 2016
Effect of Computing Hardware Evolution on Programming Languages
Note: These brief notes are expanded from the handwritten notes
used by the instructor for the lecture/discussion on this topic in
CSci 450/503 (Organization of Programming Languages) on 27 August
2014.
When were the first "modern" computers developed? That is,
programmable electronic computers.
Although the mathematical roots of computing go back more than a
thousand years, it is only with the invention of the programmable
electronic digital computer during the World War II era of the 1930s
and 1940s that modern computing began to take shape.
One of the first computers was the ENIAC (Electronic Numerical
Integrator and Computer), developed in the mid-1940s at the University
of Pennsylvania. When construction was completed in 1946, the ENIAC
cost about $500,000. In today's terms, that is more than $5,000,000. It
weighed 30 tons, occupied as much space as a small house, and consumed
160 kilowatts of electric power.
The ENIAC and most other computers of that era were designed for
military purposes, such as calculating firing tables for artillery or
breaking codes. As a result, many observers viewed the market for such
devices to be quite small. The observers were wrong!
Electronics technology has improved greatly in 70 years. Today, a
computer with the capacity of the ENIAC would be smaller than a coin
in our pockets, would consume little power, and would cost just a few
dollars on the mass market.
How have computer systems and their use evolved over the past
70 years?
- Contemporary processors are much smaller and faster. They use
much less power, cost much less money, and operate much more reliably.
- Contemporary "main" memories are much larger in capacity, smaller
in physical size, and faster in access speed. They also use much
less power, cost much less money, and operate much more reliably.
- The number of processors per machine has increased from one
to many. First, channels and other co-processors were added, then
multiple CPUs. Today, computer chips for common desktop and mobile
applications have several processors--cores--on each chip, plus
specialized processors such as graphics processing units (GPUs) for
data manipulation and parallel computation. This trend toward
multiprocessors will likely continue given that physics dictates
limits on how small and fast we can make computer processors; to
continue to increase in power means increasing parallelism.
- Contemporary external storage devices are much larger in
capacity, smaller in size, faster in access time,
and less expensive to construct.
- The number of computers available per user has increased from
much less than one to many more than one.
- Early systems were often locked into rooms, with few or no
direct connections to the external world and just a few kinds of
input/output devices. Contemporary systems may be on the user's
desktop or in the user's backpack, be connected to the internet, and
have many kinds of input/output devices.
- The range of applications has increased from a few specialized
applications (e.g., code-breaking, artillery firing tables) to almost
all human activities.
- The cost of the human staff to program, operate, and support
computer systems has probably increased somewhat (in constant
dollars).
How have these changes affected programming practice?
- In the early days of computing, computers were very expensive
and the human workers who used them were relatively inexpensive.
Today, the opposite holds, so we need to maximize human productivity.
- In the early days of computing, the slow processor speeds
and small memory sizes meant that programmers had to control these
precious resources to be able to carry out most routine
computations. Although we still need to use efficient algorithms and
data structures and use good coding practices, programmers can now
bring large amounts of computing capacity to bear on most
problems. We can use more computing resources to improve
productivity in program development and maintenance. The size of
the problems we can solve computationally has increased.
- In the early days of computing, multiple applications and
users usually had to share one computer. Today, we can often apply
many processors for each user and application if needed.
Increasingly, applications must be able to use multiple processors
effectively.
- Security on early systems meant keeping the computers in locked
rooms and restricting physical access to those rooms. In
contemporary networked systems with diverse applications, security
has become a much more difficult issue with many aspects.
- etc.
The first higher-level programming languages began to appear in the
1950s. IBM released the first compiler for a programming language in
1957--for the scientific programming language Fortran. Although Fortran
has evolved considerably during the past 60 years, it is still in use
today.
How have the above changes affected programming language design
and implementation over the past 60 years?
- Contemporary programming languages often use automatic memory
allocation and deallocation (e.g., garbage collection) to manage a
program's memory. Although programs in these languages may use more
memory and processor cycles than hand-optimized programs, they can
increase programmer productivity and the security and reliability of
the programs. Think Java, C#, and Python versus C and C++.
- Contemporary programming languages are often implemented using
an interpreter instead of a compiler that translates the program to
the processor's machine code--or with a compiler that targets a
virtual machine instruction set (which is itself interpreted on the
host processor). Again, these implementations use more processor and
memory resources to
increase programmer productivity and the security and reliability of
the programs. Think Java, C#, and Python versus C and C++.
- Contemporary programming languages should make the capabilities
of contemporary multicore systems conveniently and safely available
to programs and applications. Failing to do so limits the
performance and scalability of the application. Think Erlang, Scala,
and Clojure versus C, C++, and Java.
- Contemporary programming languages increasingly incorporate declarative
features (higher-order functions, recursion, immutable data
structures, generators, etc.). These features offer the potential of
increasing programming productivity, increasing the security and
reliability of programs, and more conveniently and safely providing
access to multicore processor capabilities. Think Scala, Clojure,
and Java 8 and beyond versus C, C++, and older Java.
- etc.
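As a small illustration of the declarative style described above, the sketch below uses higher-order functions and avoids mutating its inputs. It is written in Python (one of the languages the notes mention); the function names are our own, chosen for illustration:

```python
from functools import reduce

def squares_of_evens(xs):
    """Build a new list of the squares of the even numbers in xs,
    leaving the input list unchanged (no mutation)."""
    evens = filter(lambda x: x % 2 == 0, xs)   # higher-order function
    return list(map(lambda x: x * x, evens))   # higher-order function

def total(xs):
    """Sum a list by folding a binary operator over it."""
    return reduce(lambda acc, x: acc + x, xs, 0)

data = [1, 2, 3, 4, 5]
print(squares_of_evens(data))         # [4, 16]
print(total(squares_of_evens(data)))  # 20
print(data)                           # input unchanged: [1, 2, 3, 4, 5]
```

Because such functions compute their results without shared mutable state, independent calls can, in principle, be run on different cores without interfering with one another, which is one reason declarative features matter on multicore hardware.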
As we study programming and programming languages in this
course--in particular functional and logic languages--we need to keep
the above nature of the contemporary programming scene in mind.
Copyright © 2016, H. Conrad Cunningham
Last modified: Thu Apr 21 07:43:43 CDT 2016