Models of Computation - Mathematics 450 - Spring 2005
For HW #10
Why does our book adopt the convention that, in Turing machines, we use x+1
1s to represent the number x as input? More precisely, what would go wrong if
we instead agreed to use x 1s to represent x as input? And why is it okay
to use x 1s (rather than x+1 1s) to represent x as output?
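To make the two conventions concrete, here is a minimal Python sketch (the
names unary_input and unary_output are just illustrative, not from the book)
that writes down the string of 1s each convention puts on the tape:

    def unary_input(x):
        # Book's input convention: the number x is given by x+1 consecutive 1s.
        return "1" * (x + 1)

    def unary_output(y):
        # Output convention: the answer y is read off as y consecutive 1s.
        return "1" * y

    # Examples of the input encoding:
    #   unary_input(0) == "1"
    #   unary_input(3) == "1111"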
Suppose we use two symbols, 0 and 1, for our
Turing machine's alphabet (and allow blanks too, of course). And suppose that we represent input and output in
binary. For example, in binary, 5 is represented as 101 (1·4 + 0·2 + 1·1 = 5).
We'll also agree that the left-most digit of a number should never be
zero. For example, we write 4 as 100, not as 0100. As usual, we agree that
the read/write head always starts on the left-most non-blank square.
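As a quick sanity check on this convention, here is a small Python sketch (the
name to_binary is just illustrative) that produces the binary string we would
write on the tape:

    def to_binary(n):
        # Binary representation with no leading zeros, as agreed above.
        # Python's bin() returns e.g. "0b101"; strip the "0b" prefix.
        return bin(n)[2:]

    # Examples:
    #   to_binary(5) == "101"   (1·4 + 0·2 + 1·1 = 5)
    #   to_binary(4) == "100"   (never "0100")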