Models of Computation - Mathematics 450 - Fall 2006
For HW #10
Why does our book make the convention that, in Turing machines, we use x+1
1s to represent x as input? More precisely, what would go wrong if we instead
agreed to use x 1s to represent x as input? In contrast, it would be okay
to use x 1s (rather than x+1 1s) to represent x as output --
explain why!
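As a quick illustration (a Python sketch, not part of the assignment), the book's input convention can be phrased as an encoding function: x is written as x+1 consecutive 1s, so even x = 0 occupies at least one square.

```python
def encode_input(x):
    """Book's input convention: x is written as x+1 consecutive 1s."""
    return "1" * (x + 1)

def decode_input(s):
    """Recover x from a block of 1s written under that convention."""
    return len(s) - 1

print(encode_input(0))  # "1" -- even zero occupies a square
print(encode_input(3))  # "1111"
```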
Suppose we use two symbols, 0 and 1, for our
Turing machine's alphabet (and allow blanks too, of course). And suppose that we represent input and output in
binary. For example, in binary, 5 is represented as 101 (1x4 + 0x2 + 1x1 =
5). We'll also agree that the left-most digit of a number should never be
zero (so zero itself is written as the single digit 0). For example, we write
4 as 100, not as 0100. As usual, we agree that
the read/write head always starts on the left-most non-blank square.
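These representation rules can be checked mechanically. Here is a hedged Python sketch (the function names are my own) of the binary convention just described:

```python
def to_binary(x):
    """Binary representation with no leading zeros; 0 is written as '0'."""
    return format(x, "b")  # e.g. format(5, 'b') == '101'

def is_valid(s):
    """A tape word is valid iff it is nonempty, uses only 0/1,
    and has no leading zero (the single digit '0' being the exception)."""
    return s != "" and set(s) <= {"0", "1"} and (s == "0" or s[0] == "1")

print(to_binary(5))      # 101: 1x4 + 0x2 + 1x1
print(is_valid("0100"))  # False: leading zero
```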
Write a program that computes f(x) = 2x.
Write a program that computes f(x) = x+1.
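For concreteness, here is one way (a Python sketch, not part of the assignment) to simulate such machines and test candidate transition tables under the conventions above: binary digits plus blanks, head starting on the left-most non-blank square. The tables below are one possible solution shape, not the only one: doubling in binary appends a 0 (except on input 0, where appending would create the forbidden string 00), and incrementing walks to the right end and propagates a carry leftward.

```python
BLANK = "_"

def run(rules, tape_str):
    """Simulate a one-tape Turing machine until it reaches 'halt'."""
    tape = dict(enumerate(tape_str))
    pos, state = 0, "start"          # head on the left-most non-blank square
    while state != "halt":
        sym = tape.get(pos, BLANK)
        state, write, move = rules[(state, sym)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(BLANK)

# f(x) = 2x: scan right and append a 0, except that input "0" stays "0".
DOUBLE = {
    ("start", "0"): ("halt", "0", 0),    # x = 0: 2x = 0, nothing to do
    ("start", "1"): ("scan", "1", +1),
    ("scan", "0"):  ("scan", "0", +1),
    ("scan", "1"):  ("scan", "1", +1),
    ("scan", BLANK): ("halt", "0", 0),   # write the trailing 0
}

# f(x) = x+1: walk to the right end, then carry leftward, turning
# trailing 1s into 0s until a 0 (or the blank past the left end)
# absorbs the carry.
INCREMENT = {
    ("start", "0"):  ("start", "0", +1),
    ("start", "1"):  ("start", "1", +1),
    ("start", BLANK): ("carry", BLANK, -1),
    ("carry", "1"):  ("carry", "0", -1),
    ("carry", "0"):  ("halt", "1", 0),
    ("carry", BLANK): ("halt", "1", 0),  # e.g. 111 + 1 = 1000
}

print(run(DOUBLE, "101"))     # 1010 (5 doubled is 10)
print(run(INCREMENT, "111"))  # 1000 (7 plus 1 is 8)
```

The simulator stores the tape as a dictionary so the head can move past either end of the input without special cases; squares never visited simply read as blank.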
(Postponed to HW #11) Determine whether each of the following functions is well-defined and
computable. (For example, the following function is not well-defined, and
hence not computable: h(x) = 1 if x = 0, h(x) = 2h(x+1) if x > 0.) Give clear
and convincing arguments to support your answers.
f(x,y) = 1 if x = 0 or y = 0; f(x,y) = f(2x,y-1) + f(x-1,2y)
if x > 0 and y > 0.
g(x,y,z) = 1 if x = 0 or y = 0 or z = 0; g(x,y,z) = g(x-1,2y,2z)
+ g(x,y-1,2z) + g(x,y,z-1) if x > 0 and y > 0 and z > 0.