Exploring Languages

with Interpreters

and Functional Programming

Chapter 26

**H. Conrad Cunningham**

**18 March 2022**

Copyright (C) 2018, 2022, H. Conrad Cunningham

Professor of Computer and Information Science

University of Mississippi

214 Weir Hall

P.O. Box 1848

University, MS 38677

(662) 915-7396 (dept. office)


**Browser Advisory:** The HTML version of this textbook
requires a browser that supports the display of MathML. A good choice as
of March 2022 is a recent version of Firefox from Mozilla.

Chapter 25 illustrated how to state and prove Haskell “laws” about already defined functions.

This chapter (26) illustrates how to use similar reasoning methods to synthesize (i.e., derive or calculate) function definitions from their specifications.

Chapter 27 applies these program synthesis techniques to a larger set of examples on text processing.

This chapter deals with program synthesis.

In the *proof* of a property, we take an existing program and
then demonstrate that it satisfies some property.

In the *synthesis* of a program, we take a property called a
*specification* and then synthesize a program that satisfies it
[3]. (Program
synthesis is called program *derivation* in other contexts, such
as in the Gries textbook [13] and my *Notes on Program Semantics
and Derivation* [8].)

Both proof and synthesis require essentially the same reasoning. Often a proof can be turned into a synthesis by simply reversing a few of the steps, and vice versa.

This section is based on Bird and Wadler [3:5.4.5,3:5.5] and Hoogerwoord [15:4.5].

A (second-order) Fibonacci sequence is the sequence in which the first two elements are 0 and 1 and each successive element is the sum of the two immediately preceding elements:

` 0, 1, 1, 2, 3, 5, 8, 13, ...`

As we have seen in Chapter 9, we can take the above informal description and define a function to compute the `n`th element of the Fibonacci sequence. The definition is straightforward. Unfortunately, this algorithm is quite inefficient, O(`fib n`).

```
fib :: Int -> Int
fib 0 = 0                                -- fib.1
fib 1 = 1                                -- fib.2
fib n | n >= 2 = fib (n-1) + fib (n-2)   -- fib.3
```

In Chapter 9, we also developed a more efficient, but less straightforward, version by using two accumulating parameters. This definition seemed to be “pulled out of thin air”. Can we synthesize a definition that uses the more efficient algorithm from the simpler definition above?

Yes, but we use a slightly different approach than we did before. We
can improve the performance of the Fibonacci computation by using a
technique called *tupling* [15] as we saw in Chapter
20.

The tupling technique can be applied to a set of functions with the same domain and the same recursive pattern. First, we define a new function whose value is a tuple, the components of which are the values of the original functions. Then, we proceed to calculate a recursive definition for the new function.

This technique is similar to the technique of adding accumulating parameters to define a new function.

Given the definition of `fib` above, we begin with the specification [3]:

`twofib n = (fib n, fib (n+1))`

and synthesize a recursive definition by using induction on the natural number `n`.

**Base case n = 0:**

`twofib 0`

$=$ `{` specification `}`

`(fib 0, fib (0+1))`

$=$ `{` arithmetic, `fib.1`, `fib.2` `}`

`(0,1)`

This gives us a definition for the base case.

**Inductive case n = m+1:**

Given that there is a definition for `twofib m` that satisfies the specification

`twofib m = (fib m, fib (m+1))`

calculate a definition for `twofib (m+1)` that satisfies the specification.

`twofib (m+1)`

$=$ `{` specification `}`

`(fib (m+1), fib ((m+1)+1))`

$=$ `{` arithmetic, `fib.3` `}`

`(fib (m+1), fib m + fib (m+1))`

$=$ `{` abstraction `}`

```
(b,a+b)
    where (a,b) = (fib m, fib (m+1))
```

$=$ `{` induction hypothesis `}`

```
(b,a+b)
    where (a,b) = twofib m
```

This gives us a definition for the inductive case.

Bringing the cases together and rewriting `twofib (m+1)` to get a valid pattern, we synthesize the following definition:

```
twofib :: Int -> (Int,Int)
twofib 0 = (0,1)
twofib n | n > 0 = (b,a+b)
    where (a,b) = twofib (n-1)

fastfib :: Int -> Int
fastfib n = fst (twofib n)
```
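The synthesized `fastfib` should agree with `fib` on all natural number arguments. The following self-contained script (repeating the definitions above; the spot-check itself is not part of the synthesis) confirms this agreement on small inputs:

```
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n | n >= 2 = fib (n-1) + fib (n-2)

twofib :: Int -> (Int,Int)
twofib 0 = (0,1)
twofib n | n > 0 = (b,a+b)
    where (a,b) = twofib (n-1)

fastfib :: Int -> Int
fastfib n = fst (twofib n)

-- Spot-check that fastfib agrees with fib on small arguments.
main :: IO ()
main = print (and [fastfib n == fib n | n <- [0..15]])
```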

Above, `fst` is the standard prelude function to extract the first component of a pair (i.e., a 2-tuple).

The key to the performance improvement is solving a “harder” problem: computing `fib n` and `fib (n+1)` at the same time. This allows the values needed to be “passed forward” to the “next iteration”.

In general, we can approach the synthesis of a function using the following method.

1. Devise a specification for the function in terms of defined functions, data, etc.

2. Assume the specification holds.

3. Using proof techniques (as if proving the specification), calculate an appropriate definition for the function.

4. As needed, break the synthesis calculation into cases motivated by the induction “proof” over an appropriate (well-founded) set (e.g., over natural numbers or finite lists). The inductive cases usually correspond to recursive legs of the definition.

Now let’s consider a function to generate a list of the elements `fib 0` through `fib n` for some natural number `n`. A simple backward recursive definition follows:

```
allfibs :: Int -> [Int]
allfibs 0 = [0]                              -- allfibs.1
allfibs n | n > 0 = allfibs (n-1) ++ [fib n] -- allfibs.2
```

Using `fastfib`, each `fib n` calculation is O(`n`). Each `++` call is also O(`n`). The `fib` and the `++` are “in sequence”, so each call of `allfibs` is just O(`n`). However, there are O(`n`) recursive calls of `allfibs`, so the overall complexity is O(`n^2`).

We again attempt to improve the efficiency by tupling We begin with
the following specification for `fibs`

:

`= (fib n, fib (n+1), allfibs n) fibs n `

We already have definitions for the functions on the right-hand side, `fib` and `allfibs`. Our task now is to synthesize a definition for the left-hand side, `fibs`.

We proceed by induction on the natural number `n` and consider two cases.

**Base case n = 0:**

`fibs 0`

$=$ `{` `fibs` specification `}`

`(fib 0, fib (0+1), allfibs 0)`

$=$ `{` `fib.1`, `fib.2`, `allfibs.1` `}`

`(0,1,[0])`

This gives us a definition for the base case.

**Inductive case n = m+1:**

Given that there is a definition for `fibs m` that satisfies the specification

`fibs m = (fib m, fib (m+1), allfibs m)`

calculate a definition for `fibs (m+1)` that satisfies the specification.

`fibs (m+1)`

$=$ `{` `fibs` specification `}`

`(fib (m+1), fib (m+2), allfibs (m+1))`

$=$ `{` `fib.3`, `allfibs.2` `}`

`(fib (m+1), fib m + fib (m+1), allfibs m ++ [fib (m+1)])`

$=$ `{` abstraction `}`

```
(b,a+b,c++[b])
    where (a,b,c) = (fib m, fib (m+1), allfibs m)
```

$=$ `{` induction hypothesis `}`

```
(b,a+b,c++[b])
    where (a,b,c) = fibs m
```

This gives us a definition for the inductive case.

Bringing the cases together, we get the following definitions:

```
fibs :: Int -> (Int,Int,[Int])
fibs 0 = (0,1,[0])
fibs n | n > 0 = (b,a+b,c++[b])
    where (a,b,c) = fibs (n-1)

allfibs1 :: Int -> [Int]
allfibs1 n = thd3 (fibs n)
```
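A caution for readers trying this code in GHC: `fst` is in the standard Prelude, but `thd3` is not (it appeared in some older preludes and is available from libraries such as `Data.Tuple.Extra`). If it is unavailable, a one-line definition suffices:

```
-- Extract the third component of a 3-tuple.
thd3 :: (a,b,c) -> c
thd3 (_,_,z) = z

main :: IO ()
main = print (thd3 (1 :: Int, 2 :: Int, 3 :: Int))   -- prints 3
```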

Above, `thd3` is the standard prelude function to extract the third component of a 3-tuple.

We have eliminated the O(`n`) `fib` calculations, but still have an O(`n`) append (`++`) within each of the O(`n`) recursive calls of `fibs`. This program is better, but is still O(`n^2`).

Note that in the `c ++ [b]` expression there is a single element on the right. Perhaps we could build this term backwards using cons, an O(1) operation, and then reverse the final result.

We again attempt to improve the efficiency by tupling. We begin with the following specification for `fibs'`:

`fibs' n = (fib n, fib (n+1), reverse (allfibs n))`

For convenience in calculation, we replace `reverse` by its backward recursive equivalent `rev`.

```
rev :: [a] -> [a]
rev []     = []              -- rev.1
rev (x:xs) = rev xs ++ [x]   -- rev.2
```

We again proceed by induction on `n` and consider two cases.

**Base case n = 0:**

`fibs' 0`

$=$ `{` `fibs'` specification `}`

`(fib 0, fib (0+1), rev (allfibs 0))`

$=$ `{` `fib.1`, `fib.2`, `allfibs.1` `}`

`(0,1, rev [0])`

$=$ `{` `rev.2` `}`

`(0,1, rev [] ++ [0])`

$=$ `{` `rev.1`, `append.1` `}`

`(0,1,[0])`

This gives us a definition for the base case.

**Inductive case n = m+1:**

Given that there is a definition for `fibs' m` that satisfies the specification

`fibs' m = (fib m, fib (m+1), rev (allfibs m))`

calculate a definition for `fibs' (m+1)` that satisfies the specification.

`fibs' (m+1)`

$=$ `{` `fibs'` specification `}`

`(fib (m+1), fib (m+2), rev (allfibs (m+1)))`

$=$ `{` `fib.3`, `allfibs.2` `}`

`(fib (m+1), fib m + fib (m+1), rev (allfibs m ++ [fib (m+1)]))`

$=$ `{` abstraction `}`

```
(b, a+b, rev (allfibs m ++ [b]))
    where (a,b,c) = (fib m, fib (m+1), rev (allfibs m))
```

$=$ `{` induction hypothesis `}`

```
(b, a+b, rev (allfibs m ++ [b]))
    where (a,b,c) = fibs' m
```

$=$ `{` `rev (xs ++ [x]) = x : rev xs`, Note 1 `}`

```
(b, a+b, b : rev (allfibs m))
    where (a,b,c) = fibs' m
```

$=$ `{` substitution `}`

```
(b, a+b, b:c)
    where (a,b,c) = fibs' m
```

This gives us a definition for the inductive case.

Note 1: The proof of `rev (xs ++ [x]) = x : rev xs` is left as an exercise.

Bringing the cases together, we get the following definition:

```
fibs' :: Int -> (Int,Int,[Int])
fibs' 0 = (0,1,[0])
fibs' n | n > 0 = (b,a+b,b:c)
    where (a,b,c) = fibs' (n-1)

allfibs2 :: Int -> [Int]
allfibs2 n = reverse (thd3 (fibs' n))
```
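As before, we can spot-check the synthesized `allfibs2` against the straightforward `allfibs` with a self-contained script (repeating the earlier definitions; the check itself is not part of the synthesis):

```
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n | n >= 2 = fib (n-1) + fib (n-2)

allfibs :: Int -> [Int]
allfibs 0 = [0]
allfibs n | n > 0 = allfibs (n-1) ++ [fib n]

thd3 :: (a,b,c) -> c
thd3 (_,_,z) = z

fibs' :: Int -> (Int,Int,[Int])
fibs' 0 = (0,1,[0])
fibs' n | n > 0 = (b,a+b,b:c)
    where (a,b,c) = fibs' (n-1)

allfibs2 :: Int -> [Int]
allfibs2 n = reverse (thd3 (fibs' n))

-- Check agreement of the O(n) and O(n^2) versions on small inputs.
main :: IO ()
main = print (and [allfibs2 n == allfibs n | n <- [0..10]])
```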

Function `fibs'` is O(`n`). Hence, `allfibs2` is also O(`n`).

Are further improvements possible?

Clearly, function `fibs'` must generate an element of the sequence for each integer in the range `[0..n]`. Thus no complexity order improvement is possible.

However, from our previous experience, we know that it should be possible to avoid doing a reverse by using a tail recursive auxiliary function to compute the Fibonacci sequence. The investigation of this possible improvement is left to the reader.

For an O(log2(`n`)) algorithm to compute `fib n`, see Kaldewaij’s textbook on program derivation [16:5.2].

Synthesizing `drop` from `take`

Suppose that we have the following definition for the list function `take`, but no definition for `drop`.

```
take :: Int -> [a] -> [a]
take n _ | n <= 0 = []                -- take.1
take _ []         = []                -- take.2
take n (x:xs)     = x : take (n-1) xs -- take.3
```

Further suppose that we wish to synthesize a definition for `drop` that satisfies the following specification for any natural number `n` and finite list `xs`.

`take n xs ++ drop n xs = xs`

We proved this as a property earlier, given definitions for both `take` and `drop`. The synthesis uses induction on both `n` and `xs` and the same cases we used in the proof.

**Base case n = 0:**

`xs`

$=$ `{` specification, substitution for this case `}`

`take 0 xs ++ drop 0 xs`

$=$ `{` `take.1` `}`

`[] ++ drop 0 xs`

$=$ `{` `++` identity `}`

`drop 0 xs`

This gives the equation `drop 0 xs = xs`.

**Base case n = m+1:**

`[]`

$=$ `{` specification, substitution for this case `}`

`take (m+1) [] ++ drop (m+1) []`

$=$ `{` `take.2` `}`

`[] ++ drop (m+1) []`

$=$ `{` `++` identity `}`

`drop (m+1) []`

This gives the defining equation `drop (m+1) [] = []`. Since the value of the argument `(m+1)` is not used in the above calculation, we can generalize the definition to `drop _ [] = []`.

**Inductive case n = m+1, xs = (a:as):**

Given that there is a definition for `drop m as` that satisfies the specification

`take m as ++ drop m as = as`

calculate an appropriate definition for `drop (m+1) (a:as)` that satisfies the specification.

`(a:as)`

$=$ `{` specification, substitution for this case `}`

`take (m+1) (a:as) ++ drop (m+1) (a:as)`

$=$ `{` `take.3` `}`

`(a:(take m as)) ++ drop (m+1) (a:as)`

$=$ `{` `append.2` `}`

`a:(take m as ++ drop (m+1) (a:as))`

Hence, `a:(take m as ++ drop (m+1) (a:as)) = (a:as)`.

`a:(take m as ++ drop (m+1) (a:as)) = (a:as)`

$\equiv$ `{` axiom of equality of lists (Note 1) `}`

`take m as ++ drop (m+1) (a:as) = as`

$\equiv$ `{` `m` $\geq$ `0`, specification `}`

`take m as ++ drop (m+1) (a:as) = take m as ++ drop m as`

$\equiv$ `{` equality of lists (Note 2) `}`

`drop (m+1) (a:as) = drop m as`

Because of the induction hypothesis, we know that `drop m as` is defined. This gives a definition for this case.

Notes:

- The symbol $\equiv$ denotes logical equivalence (i.e., if and only if) and is pronounced “equivales”.

- Note 1: `(x:xs) = (y:ys)` $\equiv$ `x = y && xs = ys`. In this case, `x` and `y` both equal `a`.

- Note 2: `xs ++ ys = xs ++ zs` $\equiv$ `ys = zs` can be proved by induction on `xs` using the Note 1 property.

Bringing the cases together, we get the definition that we saw earlier.

```
drop :: Int -> [a] -> [a]
drop n xs | n <= 0 = xs -- drop.1
drop _ [] = [] -- drop.2
drop n (_:xs) = drop (n-1) xs -- drop.3
```
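The synthesized definition can be spot-checked against its specification in a standalone script (using primed names to avoid clashes with the Prelude’s `take` and `drop`):

```
take' :: Int -> [a] -> [a]
take' n _ | n <= 0 = []
take' _ []         = []
take' n (x:xs)     = x : take' (n-1) xs

drop' :: Int -> [a] -> [a]
drop' n xs | n <= 0 = xs
drop' _ []          = []
drop' n (_:xs)      = drop' (n-1) xs

-- The specification: take' n xs ++ drop' n xs == xs
holds :: Int -> [Int] -> Bool
holds n xs = take' n xs ++ drop' n xs == xs

main :: IO ()
main = print (and [holds n [1..k] | n <- [0..6], k <- [0..6]])
```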

In Chapter 14, we looked at two different definitions of a function to reverse the elements of a list. Function `rev` uses a straightforward backward linear recursive technique and `reverse` uses a tail recursive auxiliary function. We proved that these definitions are equivalent.

```
rev :: [a] -> [a]
rev []     = []              -- rev.1
rev (x:xs) = rev xs ++ [x]   -- rev.2

reverse :: [a] -> [a]
reverse xs = rev' xs []                   -- reverse.1
    where rev' []     ys = ys             -- reverse.2
          rev' (x:xs) ys = rev' xs (x:ys) -- reverse.3
```

Function `rev'` is a *generalization* of `rev`. Is there a way to calculate `rev'` from `rev`?

Yes, by using the Tail Recursion Theorem for lists. We develop this theorem in a more general setting than `rev`.

The following is based on Hoogerwoord [15:4.7].

For some types `X` and `Y`, let function `fun` be defined as follows:

```
fun :: X -> Y
fun x | not (b x) = f x                -- fun.1
      | b x       = h x *** fun (g x)  -- fun.2
```

- Functions `b`, `f`, `g`, `h`, and `***` are *not* defined in terms of `fun`.

- `b :: X -> Bool` such that, for any `x`, `b x` is defined whenever `fun x` is defined.

- `g :: X -> X` such that, for any `x`, `g x` is defined whenever `fun x` is defined and `b x` holds.

- `h :: X -> Y` such that, for any `x`, `h x` is defined whenever `fun x` is defined and `b x` holds.

- `(***) :: Y -> Y -> Y` such that operation `***` is defined for all elements of `Y` and is an associative operation with left identity `e`.

- `f :: X -> Y` such that, for any `x`, `f x` is defined whenever `fun x` is defined and `not (b x)` holds.

- `X` with relation $\prec$ admits induction (i.e., $\langle\texttt{X},\prec\rangle$ is a *well-founded ordering*).

- For any `x`, if `fun x` is defined and `b x` holds, then `g x` $\prec$ `x`.

Note that both `fun x` and the recursive leg `h x *** fun (g x)` have the general structure `y *** fun z` for some expressions `y` and `z` (i.e., `fun x = e *** fun x`). Thus we specify a more general function `fun'` such that

```
fun' :: Y -> X -> Y
fun' y x = y *** fun x
```

and such that `fun'` is defined for any `x` $\in$ `X` for which `fun x` is defined.

Given the above specification, we note that:

`fun' e x`

$=$ `{` `fun'` specification `}`

`e *** fun x`

$=$ `{` `e` is the left identity for `***` `}`

`fun x`

We proceed by induction on the type `X` with $\prec$. (We are using *well-founded induction*, a more general form of induction than we have used before.)

We have two cases. The base case is when `not (b x)` holds for argument `x` of `fun'`. The inductive case is when `b x` holds (i.e., `g x` $\prec$ `x`).

**Base case not (b x):** (That is, `x` is a minimal element of `X` under $\prec$.)

`fun' y x`

$=$ `{` `fun'` specification `}`

`y *** fun x`

$=$ `{` `fun.1` `}`

`y *** f x`

**Inductive case b x:** (That is, `g x` $\prec$ `x`.)

Given that there is a definition for `fun' y (g x)` that satisfies the specification for any `y`

`fun' y (g x) = y *** fun (g x)`

calculate a definition for `fun' y x` that satisfies the specification.

`fun' y x`

$=$ `{` `fun'` specification `}`

`y *** fun x`

$=$ `{` `fun.2` `}`

`y *** (h x *** fun (g x))`

$=$ `{` `***` associativity `}`

`(y *** h x) *** fun (g x)`

$=$ `{` `g x` $\prec$ `x`, induction hypothesis `}`

`fun' (y *** h x) (g x)`

Thus we have synthesized the following tail recursive definition for function `fun'` and essentially proved the Tail Recursion Theorem shown below.

```
fun' :: Y -> X -> Y
fun' y x | not (b x) = y *** f x               -- fun'.1
         | b x       = fun' (y *** h x) (g x)  -- fun'.2
```

Note that the first parameter of `fun'` is an *accumulating parameter*.

*Tail Recursion Theorem:* If `fun`, `fun'`, and `e` are defined as given above, then `fun x = fun' e x`.

Now let’s consider the `rev` and `rev'` functions again. First, let’s rewrite the definition of `rev` in a form similar to the definition of `fun`.

```
rev :: [a] -> [a]
rev xs | xs == [] = []                          -- rev.1
       | xs /= [] = rev (tail xs) ++ [head xs]  -- rev.2
```

For `rev` we substitute the following for the components of the `fun` definition:

- `fun x` $\leftarrow$ `rev xs`
- `b x` $\leftarrow$ `xs /= []`
- `g x` $\leftarrow$ `tail xs`
- `h x` $\leftarrow$ `[head xs]`
- `l *** r` $\leftarrow$ `r ++ l` (Note the flipped operands.)
- `f x` $\leftarrow$ `[]`
- `l` $\prec$ `r` $\leftarrow$ `(length l) < (length r)`
- `e` $\leftarrow$ `[]`
- `fun' y x` $\leftarrow$ `rev' xs ys` (Note the flipped arguments.)

Thus, by applying the Tail Recursion Theorem, `fun'` becomes the following:

```
rev' :: [a] -> [a] -> [a]
rev' xs ys | xs == [] = ys                                -- rev'.1
           | xs /= [] = rev' (tail xs) ([head xs] ++ ys)  -- rev'.2
```

From the Tail Recursion Theorem, we conclude that `rev xs = rev' xs []`.

Why would we want to convert a backward linear recursive function to a tail recursive form?

- A tail recursive definition is sometimes more space efficient (as we saw in Chapter 9). This is especially the case if the strictness of an accumulating parameter can be exploited (as we saw in Chapters 9 and 15).

- A tail recursive definition sometimes allows the replacement of an “expensive” operation (requiring many steps) by a less “expensive” one. (For example, `++` is replaced by cons in the transformation from `rev` to `rev'`.)

- A tail recursive definition can be transformed (either by hand or by a compiler) into an efficient loop.

- A tail recursive definition is usually more general than its backward linear recursive counterpart. Sometimes we can exploit this generality to synthesize a more efficient definition. (We see an example of this in the next subsection.)

This section is adapted from Cohen [5:11.3].

Although the Tail Recursion Theorem is important, the technique we used to develop it is perhaps even more important. We can sometimes use the technique to transform one tail recursive definition into another that is more efficient [15].

Consider exponentiation by a natural number power. The operation `**` can be defined recursively in terms of multiplication as follows:

```
infixr 8 **
(**) :: Int -> Int -> Int
m ** 0 = 1                          -- **.1
m ** n | n > 0 = m * (m ** (n-1))   -- **.2
```

For `(**)` we substitute the following for the components of the `fun` definition of the previous subsection:

- `fun x` $\leftarrow$ `m ** n`
- `b x` $\leftarrow$ `n > 0` (Applied only to natural numbers.)
- `g x` $\leftarrow$ `n - 1`
- `h x` $\leftarrow$ `m`
- `l *** r` $\leftarrow$ `l * r`
- `f x` $\leftarrow$ `1`
- `l` $\prec$ `r` $\leftarrow$ `l < r`
- `e` $\leftarrow$ `1`
- `fun' y x` $\leftarrow$ `exp a m n`

Thus, by applying the Tail Recursion Theorem, we define the function `exp` such that

`exp a m n = a * (m ** n)`

and, in particular:

`exp 1 m n = m ** n`

The resulting function `exp` is defined as follows (for `n >= 0`):

```
exp :: Int -> Int -> Int -> Int
exp a m 0 = a                           -- exp.1
exp a m n | n > 0 = exp (a*m) m (n-1)   -- exp.2
```

In terms of time, this function is no more efficient than the original version; both require O(`n`) multiplies. (However, by exploiting the strictness of the first parameter, `exp` can be made more space efficient than `**`.)

Note that the `exp` algorithm converges upon the final result in steps of one. Can we take advantage of the generality of `exp` and the arithmetic properties of exponentiation to find an algorithm that converges in larger steps?

Yes, we can by using the technique that we used to develop the Tail Recursion Theorem. In particular, let’s try to synthesize an algorithm that converges logarithmically (in steps of half the distance) instead of linearly.

Speaking operationally, we are looking for a “short cut” to the result. To find this short cut, we use the “maps” that we have of the “terrain”. That is, we take advantage of the properties we know about the exponentiation operator.

We thus attempt to find expressions `x` and `y` such that

`exp x y (n/2) = exp a m n`

where “`/`” represents division on integers.

For the base case where `n = 0`, this is trivial. We proceed with a calculation to discover values for `x` and `y` that make

`exp x y (n/2) = exp a m n`

when `n > 0` (i.e., in the inductive case). In doing this, we can use the specification for `exp` (i.e., `exp a m n = a * (m ** n)`).

`exp x y (n/2)`

$=$ `{` `exp` specification `}`

`x * (y ** (n/2))`

$=$ `{` choose `y = m ** 2` (Note 1) `}`

`x * ((m ** 2) ** (n/2))`

Note 1: The strategy is to make choices for `x` and `y` that make

`x * (y ** (n/2))`

and

`a * (m ** n)`

equal. This choice for `y` is a step toward getting the `m ** n` term.

Because we are dealing with integer division, we need to consider two cases because of truncation.

**Subcase even n (for n > 0):**

`x * ((m ** 2) ** (n/2))`

$=$ `{` arithmetic properties of exponentiation, `n` even `}`

`x * (m ** n)`

$=$ `{` choose `x = a`, toward getting `a * (m ** n)` `}`

`a * (m ** n)`

$=$ `{` `exp` specification `}`

`exp a m n`

Thus, for even `n`, we derive:

`exp a m n = exp a (m*m) (n/2)`

We optimize and replace `m ** 2` by `m * m`.

**Subcase odd n (for n > 0):** That is, `n/2 = (n-1)/2`.

`x * ((m ** 2) ** ((n-1)/2))`

$=$ `{` arithmetic properties of exponentiation `}`

`x * (m ** (n-1))`

$=$ `{` choose `x = a * m`, toward getting `a * (m ** n)` `}`

`(a * m) * (m ** (n-1))`

$=$ `{` arithmetic properties of exponentiation `}`

`a * (m ** n)`

$=$ `{` `exp` specification `}`

`exp a m n`

Thus, for odd `n`, we derive:

`exp a m n = exp (a*m) (m*m) (n/2)`

To differentiate the logarithmic definition for exponentiation from the linear one, we rename the former to `exp'`. We have thus defined `exp'` as follows (for `n >= 0`, writing the integer division “`/`” of the derivation as `div`):

```
exp' :: Int -> Int -> Int -> Int
exp' a m 0 = a                                          -- exp'.1
exp' a m n | even n = exp' a (m*m) (n `div` 2)          -- exp'.2
           | odd n  = exp' (a*m) (m*m) ((n-1) `div` 2)  -- exp'.3
```
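A self-contained script can check that the linear and logarithmic definitions agree on sample values (hiding the Prelude’s own `(**)` and `exp` to avoid name clashes):

```
import Prelude hiding ((**), exp)

infixr 8 **
(**) :: Int -> Int -> Int
m ** 0 = 1
m ** n | n > 0 = m * (m ** (n-1))

exp' :: Int -> Int -> Int -> Int
exp' a m 0 = a
exp' a m n | even n = exp' a (m*m) (n `div` 2)
           | odd n  = exp' (a*m) (m*m) ((n-1) `div` 2)

-- exp' 1 m n should equal m ** n for all natural n.
main :: IO ()
main = print (and [exp' 1 m n == m ** n | m <- [0..5], n <- [0..10]])
```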

Above we showed that `exp a m n = exp' a m n`. However, execution of `exp'` converges faster upon the result: O(log2(`n`)) steps rather than O(`n`).

Note: Multiplication and division of integers by natural number powers of $2$, particularly $2^{1}$, can be implemented on most current computers by arithmetic left and right shifts, respectively, which are faster than general multiplication and division.
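In Haskell, these shift operations are available from `Data.Bits`. For example, halving and doubling a non-negative `Int`:

```
import Data.Bits (shiftL, shiftR)

halve, double :: Int -> Int
halve n  = n `shiftR` 1   -- same as n `div` 2 for n >= 0
double n = n `shiftL` 1   -- same as n * 2

main :: IO ()
main = print (halve 13, double 13)   -- prints (6,26)
```

(Whether this is actually faster than `div` and `*` in compiled Haskell depends on the compiler, which may already perform this strength reduction itself.)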

Chapter 27 applies the program synthesis techniques developed in this chapter to a larger set of examples on text processing.

No subsequent chapter depends explicitly upon the program synthesis content from these chapters. However, if practiced regularly, the techniques explored in this chapter can enhance a programmer’s ability to solve problems and construct correct functional programming solutions.

Exercises

1. The following function computes the integer base 2 logarithm of a positive integer:

    ```
    lg :: Int -> Int
    lg x | x == 1 = 0
         | x > 1  = 1 + lg (x/2)
    ```

    Using the Tail Recursion Theorem, write a definition for `lg` that is tail recursive.

2. Synthesize the recursive definition for `++` from the following specification:

    `xs ++ ys = foldr (:) ys xs`

3. Using tupling and function `fact5` from Chapter 4, synthesize an efficient function `allfacts` to generate a list of factorials for natural numbers `0` through parameter `n`, inclusive.

4. Consider the following recursive definition for natural number multiplication:

    ```
    mul :: Int -> Int -> Int
    mul m 0     = 0
    mul m (n+1) = m + mul m n
    ```

    This is an O(`n`) algorithm for computing `m * n`. Synthesize an alternative operation that is O(log2(`n`)). Doubling (i.e., `n*2`) and halving (i.e., `n/2` with truncation) operations may be used, but not multiplication (`*`) in general.

5. Derive a “more general” version of the Tail Recursion Theorem for functions of the shape

    ```
    func :: X -> Y
    func x | not (b x) = f x                          -- func.1
           | b x       = h x *** func (g x) +++ d x   -- func.2
    ```

    where functions `b`, `f`, `g`, and `h` are constrained as in the definition of `fun` in the Tail Recursion Theorem. Be sure to identify the appropriate constraints on `d`, `***`, and `+++`, including the necessary properties of `***` and `+++`.

In Summer 2018, I adapted and revised this chapter and the next from
Chapter 12 of my *Notes on Functional Programming with Haskell*
[9].

These previous notes drew on the presentations in the first edition of the classic Bird and Wadler textbook [3] and other functional programming sources [1,2,15,17,18]. They were also influenced by my research, study, and teaching related to program specification, verification, derivation, and semantics [[4]; [5]; [6]; [7]; [8]; [10]; [11]; [12]; [13]; [14]; [16]; vanGesteren1990].

I incorporated this work as new Chapter 26, Program Synthesis (this
chapter), and new Chapter 27, Text Processing, in the 2018 version of
the textbook *Exploring Languages with Interpreters and Functional
Programming* and continue to revise it.

I retired from the full-time faculty in May 2019. As one of my post-retirement projects, I am continuing work on this textbook. In January 2022, I began refining the existing content, integrating additional separately developed materials, reformatting the document (e.g., using CSS), constructing a bibliography (e.g., using citeproc), and improving the build workflow and use of Pandoc.

I maintain this chapter as text in Pandoc’s dialect of Markdown using embedded LaTeX markup for the mathematical formulas and then translate the document to HTML, PDF, and other forms as needed.

TODO

[1]

Richard Bird. 1998. *Introduction to
functional programming using Haskell* (Second ed.).
Prentice Hall, Englewood Cliffs, New Jersey, USA.

[2]

Richard Bird. 2015. *Thinking functionally
with Haskell* (First ed.). Cambridge University Press,
Cambridge, UK.

[3]

Richard Bird and Philip Wadler. 1988.
*Introduction to functional programming* (First ed.). Prentice
Hall, Englewood Cliffs, New Jersey, USA.

[4]

K.
Mani Chandy and Jayadev Misra. 1988. *Parallel program design: A
foundation*. Addison Wesley, Boston, Massachusetts, USA.

[5]

Edward Cohen. 1990. *Programming in the
1990’s: An introduction to the calculation of programs*. Springer,
New York, New York, USA.

[6]

H.
Conrad Cunningham. 1989. The shared dataspace approach to concurrent
computation: The Swarm programming model, notation, and
logic. PhD thesis. Washington University, Department of Computer
Science, St. Louis, Missouri, USA.

[7]

H.
Conrad Cunningham. 2006. *A programmer’s introduction to predicate
logic*. University of Mississippi, Department of Computer
and Information Science, University, Mississippi, USA.
Retrieved from https://john.cs.olemiss.edu/~hcc/csci450/notes/haskell_notes.pdf

[8]

H.
Conrad Cunningham. 2006. *Notes on program semantics and
derivation*. University of Mississippi, Department of Computer
and Information Science, University, Mississippi, USA.
Retrieved from https://john.cs.olemiss.edu/~hcc/reports/umcis-1994-02.pdf

[9]

H.
Conrad Cunningham. 2014. *Notes on functional programming with
Haskell*. University of Mississippi, Department of
Computer and Information Science, University, Mississippi,
USA. Retrieved from https://john.cs.olemiss.edu/~hcc/csci450/notes/haskell_notes.pdf

[10]

Edsger W. Dijkstra. 1976. Updating a sequential
file. In *A discipline of programming*. Prentice Hall, Englewood
Cliffs, New Jersey, USA, 117--122.

[11]

Edsger W. Dijkstra and Wim H. J. Feijen. 1988.
*A method of programming*. Addison-Wesley, TBD.

[12]

Edsger W. Dijkstra and Carel S. Scholten. 1990.
*Predicate calculus and program semantics*. Springer, New York,
New York, USA.

[13]

David Gries. 1981. *Science of
programming*. Springer, New York, New York, USA.

[14]

David Gries and Fred B. Schneider. 2013. *A
logical approach to discrete math*. Springer, New York, New York,
USA.

[15]

Robert R. Hoogerwoord. 1989. The design of
functional programs: A calculational approach. PhD thesis. Eindhoven
Technical University, Eindhoven, The Netherlands.

[16]

Anne Kaldewaij. 1990. *Programming: The
derivation of algorithms*. Prentice Hall, New York, New York,
USA.

[17]

Simon Thompson. 1996. *Haskell: The craft of
programming* (First ed.). Addison-Wesley, Boston, Massachusetts,
USA.

[18]

E.
Peter Wentworth. 1990. *Introduction to functional programming using
RUFL*. Rhodes University, Department of Computer
Science, Grahamstown, South Africa.