Exploring Languages

with Interpreters

and Functional Programming

Chapter 20

**H. Conrad Cunningham**

**04 April 2022**

Copyright (C) 2016, 2017, 2018, 2022, H. Conrad Cunningham

Professor of Computer and Information Science

University of Mississippi

214 Weir Hall

P.O. Box 1848

University, MS 38677

(662) 915-7396 (dept. office)


**Browser Advisory:** The HTML version of this textbook
requires a browser that supports the display of MathML. A good choice as
of April 2022 is a recent version of Firefox from Mozilla.

This Chapter is incomplete.

TODO:

- Add Chapter Introduction
- Give additional and improved examples.
- Add What Next?, Chapter Source Code, and Exercises sections

I approach computing science with the following philosophy:

Programming is the essence of computing science.

Problem solving is the essence of programming.

Here I consider programming as the process of analyzing a problem and formulating a solution suitable for execution on a computer. The solution should be correct, elegant, efficient, and robust. It should be expressed in a manner that is understandable, maintainable, and reusable. The solution should balance generality and specificity, abstraction and concreteness.

In my view, programming is far more than just coding. It subsumes the concerns of algorithms, data structures, and software engineering. It uses programming languages and software development tools. It also uses the intellectual tools of mathematics, logic, linguistics, and computing science theory, among others.

The mathematician George Polya (1887–1985), a Professor of
Mathematics at Stanford University, said the following in the preface to
his book *Mathematical Discovery: On Understanding, Learning and
Teaching Problem Solving* [4].

Solving a problem means finding a way out of a difficulty, a way around an obstacle, attaining an aim which was not immediately attainable. Solving problems is the specific achievement of intelligence, and intelligence is the specific gift of mankind: solving problems can be regarded as the most characteristically human activity. …

Solving problems is a practical art, like swimming, or skiing, or playing the piano: you learn it only by imitation and practice. … if you wish to learn swimming you have to go into the water, and if you wish to become a problem solver you have to solve problems.

If you wish to derive the most profit from your effort, look out for such features of a problem at hand as may be useful in handling the problems to come. A solution that you have obtained by your own effort or one that you have read or heard, but have followed with real interest and insight, may become a *pattern* for you, a model that you can imitate with advantage in solving similar problems. …

Our knowledge about any subject consists of *information* and *know-how*. If you have genuine bona fide experience of mathematical work on any level, elementary or advanced, there will be no doubt in your mind that, in mathematics, know-how is much more important than mere possession of information. … What is know-how in mathematics? The ability to solve problems—not merely routine problems but problems requiring some degree of independence, judgment, originality, creativity. Therefore, the first and foremost duty … in teaching mathematics is to emphasize methodical work in problem solving.

What Polya says for mathematics holds just as much for computing science.

In the book *How to Solve It* [3], Polya states four phases of problem
solving. These steps are important for programming as well.

1. Understand the problem.
2. Devise a plan.
3. Carry out the plan, checking each step.
4. Reexamine and reconsider the solution. (And, of course, reexamine the understanding of the problem, the plan, and the way the plan was carried out.)

There are many problem-solving strategies applicable to programming in general and functional programming in particular. We have seen some of these in the earlier chapters and will see others in later chapters. In this section, we highlight some of the general techniques.

The first strategy is to *solve a more general problem first*.
That is, we solve a “harder” problem than the specific problem at hand,
then use the solution of the “harder” problem to get the specific
solution desired.

Sometimes a solution of the more general problem is actually easier to find because the problem is simpler to state, more symmetrical, or less obscured by special conditions. The general solution can often be used to solve other related problems.

Often the solution of the more general problem can actually lead to a more efficient solution of the specific problem.

We have already seen one example of this technique: finding the first occurrence of an item in a list in Chapter 18.

First, we devised a program to find all occurrences in a list. Then we selected the first occurrence from the set of all occurrences. (Lazy evaluation of Haskell programs means that this use of a more general solution differs very little in efficiency from a specialized version.)
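The idea can be sketched as follows. This is an illustrative version, not the exact code from Chapter 18; the names `positions` and `firstPosition` are assumptions of this sketch.

```
-- General problem: all positions at which x occurs in the list.
positions :: Eq a => a -> [a] -> [Int]
positions x xs = [ i | (i,y) <- zip [0..] xs, y == x ]

-- Specific problem: the first such position, if any.
-- Because of lazy evaluation, only enough of the positions list
-- is computed to produce its head.
firstPosition :: Eq a => a -> [a] -> Maybe Int
firstPosition x xs =
    case positions x xs of
        []    -> Nothing
        (i:_) -> Just i
```

For example, `positions 'a' "banana"` yields `[1,3,5]`, while `firstPosition 'a' "banana"` yields `Just 1` after examining the list only as far as the first match.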

We have also seen several cases where we have generalized problems by
adding one or more *accumulating parameters*. These “harder”
problems can lead to more efficient tail recursive solutions.

For example, consider the tail recursive Fibonacci program we developed in Chapter 9. We added two extra arguments to the function.

```
fib2 :: Int -> Int
fib2 n | n >= 0 = fibIter n 0 1
    where fibIter 0 p q         = p
          fibIter m p q | m > 0 = fibIter (m-1) q (p+q)
```

Another approach is to use the *tupling* technique. Instead of
adding extra arguments, we add extra results.

For example, in the Fibonacci program `fastfib` below, we compute `(fib n, fib (n+1))` instead of just `fib n`. This is a harder problem, but it actually gives us more information to work with and, hence, provides more opportunity for optimization. (We formally derive this solution in Chapter 26.)

```
fastfib :: Int -> Int
fastfib n | n >= 0 = fst (twofib n)

twofib :: Int -> (Int,Int)
twofib 0 = (0,1)
twofib n = (b,a+b)
    where (a,b) = twofib (n-1)
```

The second strategy is to *solve a simpler problem first*.
After solving the simpler problem, we then adapt or extend the solution
to solve the original problem.

Often the mass of details in a problem description makes seeing a solution difficult. In the previous technique we made the problem easier by finding a more general problem to solve. In this technique, we move in the other direction: we find a more specific problem that is similar and solve it.

At worst, by solving the simpler problem we should get a better understanding of the problem we really want to solve. The more familiar we are with a problem, the more information we have about it, and, hence, the more likely we will be able to solve it.

At best, by solving the simpler problem we will find a solution that can be easily extended to build a solution to the original problem.

Consider a program to convert a positive integer of up to six digits to a string consisting of the English words for that number. For example, `369027` yields the string:

`three hundred and sixty-nine thousand and twenty-seven`

To deal with the complexity of this problem, we can work as follows:

- Solve the problem of converting a two-digit number to words. (The single digit numbers and numbers in teens are special cases.)
- Then extend the two-digit solution to three digits. (This can basically use the solution to part “a” twice.)
- Then extend the three-digit solution to six digits. (This can basically use the solution to part “b” twice.)

See Section 4.1 of the classic Bird and Wadler textbook [1] for the details of this problem and a solution.

TODO: May want to create some code for this problem rather than just refer to an old textbook.
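As a partial illustration of step “a” above, a two-digit converter might be sketched as follows. This is an assumption-laden sketch, not Bird and Wadler’s code; the names `units`, `teens`, `tens`, and `convert2` are invented here.

```
units, teens, tens :: [String]
units = ["one","two","three","four","five","six","seven","eight","nine"]
teens = ["ten","eleven","twelve","thirteen","fourteen","fifteen",
         "sixteen","seventeen","eighteen","nineteen"]
tens  = ["twenty","thirty","forty","fifty","sixty","seventy",
         "eighty","ninety"]

-- Convert n, with 1 <= n <= 99, to English words.
-- Single digits and the teens are the special cases noted above.
convert2 :: Int -> String
convert2 n
    | n < 10          = units !! (n-1)
    | n < 20          = teens !! (n-10)
    | n `mod` 10 == 0 = tens !! (n `div` 10 - 2)
    | otherwise       = tens !! (n `div` 10 - 2) ++ "-"
                            ++ units !! (n `mod` 10 - 1)
```

For example, `convert2 27` yields `"twenty-seven"` and `convert2 15` yields `"fifteen"`. Steps “b” and “c” would then call this function twice each, inserting words such as “hundred” and “thousand”.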

The process of generalizing first-order functions into higher-order functions is another example of this “solve a simpler problem first” strategy. Recall how we motivated the development of the higher-order library functions such as `map`, `filter`, and `foldr` in Chapter 15. Also consider the function generalization approach used in the cosequential processing case study in Chapter 19.

The third strategy is to *reuse an off-the-shelf solution to a standard subproblem*.

We have been doing this throughout the semester, especially since we began studying polymorphism and higher-order functions.

The basic idea is to identify standard patterns of computation (e.g., standard Prelude functions such as `length`, `take`, `zip`, `map`, `filter`, and `foldr`) that will solve some aspects of the problem and then combine (e.g., using function composition) these standard patterns with your own specialized functions to construct a solution to the problem.
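As a small illustration (an invented example, not one from the text; the name `longWordCount` and its predicate are assumptions of this sketch), a pipeline of standard Prelude functions combines with one specialized predicate:

```
-- Count the words in a string longer than a given length.
-- Built from the off-the-shelf patterns words, filter, and length,
-- composed with one small problem-specific predicate.
longWordCount :: Int -> String -> Int
longWordCount n = length . filter isLong . words
    where isLong w = length w > n
```

For example, `longWordCount 3 "the quick brown fox ran"` yields `2`. Only the predicate `isLong` is specific to this problem; everything else is reused.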

We have seen several examples of this technique in this textbook and its exercises.

See Section 4.2 of the classic Bird and Wadler textbook [1] for a case study that develops a package of functions to do arithmetic on variable length integers. The functions take advantage of several of the standard Prelude functions.

The fifth strategy is to *separate concerns*. That is, we
partition the problem into logically separate problems, solve each
problem separately, then combine the solutions to the subproblems to
construct a solution to the problem at hand.

As we have seen in the above strategies, when a problem is complex and difficult to attack directly, we search for simpler, but related, problems to solve, then build a solution to the complex problem from the simpler problems.

We have seen examples of this approach in earlier chapters and homework assignments. We separated concerns when we used stepwise refinement to develop a square root function, data abstraction in the rational number case study, and function pipelines.

Consider the development of a program to print a calendar for any year in various formats. We can approach this problem by first separating it into two independent subproblems:

- building a calendar
- formatting the output

After solving each of these simpler problems, the more complex problem can be solved easily by combining the two solutions. (See Section 4.5 of the classic Bird and Wadler textbook [1] for the details of this problem and a solution.)

The sixth strategy is *divide and conquer*. This is a special
case of the “solve a simpler problem first” strategy. In this technique,
we must divide the problem into subproblems that are the same as the
original problem except that the size of the input is smaller.

This process of division continues recursively until we get a problem that can be solved trivially; then we reverse the process by combining the solutions to subproblems to form solutions to larger problems.

Examples of divide and conquer from earlier chapters include the logarithmic exponentiation function `expt3` in Chapter 9 and the merge sort function `msort` in Chapter 17.

Another common example of the divide and conquer approach is binary search. (See Section 6.4.3 of the classic Bird and Wadler textbook [1].)

Chapter 17 examines divide-and-conquer algorithms in terms of a higher-order function that captures the pattern.
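A combinator along the following lines captures that pattern (an illustrative sketch under stated assumptions; the name `divConq`, its argument order, and the `msort'` instance are inventions here and may differ from Chapter 17’s actual function):

```
-- Generic divide and conquer: given a test for trivial problems,
-- a direct solver, a divider, and a combiner, solve any problem
-- by recursive division.
divConq :: (p -> Bool)      -- is the problem trivial?
        -> (p -> s)         -- solve a trivial problem directly
        -> (p -> [p])       -- divide into smaller subproblems
        -> (p -> [s] -> s)  -- combine subsolutions
        -> p -> s
divConq trivial solve divide combine = go
  where go p
          | trivial p = solve p
          | otherwise = combine p (map go (divide p))

-- Merge sort expressed as an instance of the pattern.
msort' :: Ord a => [a] -> [a]
msort' = divConq (\xs -> length xs <= 1) id divide combine
  where divide xs = let (l,r) = splitAt (length xs `div` 2) xs in [l,r]
        combine _ [l,r] = merge l r
        combine _ _     = error "unexpected division"
        merge [] ys = ys
        merge xs [] = xs
        merge (x:xs) (y:ys)
          | x <= y    = x : merge xs (y:ys)
          | otherwise = y : merge (x:xs) ys
```

For example, `msort' [3,1,2]` yields `[1,2,3]`. Binary search and logarithmic exponentiation fit the same shape, differing only in how they divide and combine.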

There are, of course, other strategies that can be used to approach problem solving.

TODO

TODO

TODO

In 2016 and 2017, I adapted and revised my previous notes to form Chapter 7, More List Processing and Problem Solving, in the 2017 version of this textbook. In particular, I drew the information on Problem Solving from:

- chapter 10 of my *Notes on Functional Programming with Haskell* for the discussion of problem-solving techniques in section 7.4. Chapter 10 drew on chapters 4 and 6 of Bird [1], chapter 4 of [5], and two of Polya’s books [3,4].

- part of chapter 12 of *Notes on Functional Programming with Haskell* for the discussion of the tupling technique incorporated into subsection 7.4.2.1. The tupling discussion originally drew from Bird [1] and Hoogerwoord [2].

In Summer 2018, I divided the previous More List Processing and
Problem Solving chapter back into two chapters in the 2018 version of
the textbook, now titled *Exploring Languages with Interpreters and
Functional Programming*. Previous sections 7.2-7.3 became the basis
for new Chapter 18, More List Processing, and previous section 7.4
(essentially the two items above) became the basis for new Chapter 20
(this chapter), Problem Solving.

I retired from the full-time faculty in May 2019. As one of my post-retirement projects, I am continuing work on this textbook. In January 2022, I began refining the existing content, integrating additional separately developed materials, reformatting the document (e.g., using CSS), constructing a bibliography (e.g., using citeproc), and improving the build workflow and use of Pandoc.

I maintain this chapter as text in Pandoc’s dialect of Markdown using embedded LaTeX markup for the mathematical formulas and then translate the document to HTML, PDF, and other forms as needed.

Problem solving, Polya, information, know-how, bona fide experience, problem-solving strategies, solve a more general (harder) problem first, accumulating parameters, tupling, solve a simpler problem first, reuse an off-the-shelf solution, higher-order functions, stepwise refinement, data abstraction, solve a related problem, separate concerns, divide and conquer.

[1]

Richard Bird and Philip Wadler. 1988.
*Introduction to functional programming* (First ed.). Prentice
Hall, Englewood Cliffs, New Jersey, USA.

[2]

Robert R. Hoogerwoord. 1989. The design of
functional programs: A calculational approach. PhD thesis. Eindhoven
Technical University, Eindhoven, The Netherlands.

[3]

George Polya. 1957. *How to solve it: A new
aspect of mathematical method* (Second ed.). Princeton University
Press.

[4]

George Polya. 1981. *Mathematical discovery:
On understanding, learning, and teaching problem solving* (Combined
ed.). Wiley, Hoboken, New Jersey, USA.

[5]

Simon Thompson. 2011. *Haskell: The craft of
programming* (Third ed.). Addison-Wesley, Boston, Massachusetts,
USA.