
Time and space

In Computability Theory, we used classes such as $R$ and $RE$ to establish a hierarchy of hardness for problem solvability (in terms of the Turing Machine).

We now take into account the resources spent by the Turing Machine during the computation process.

Which resources do you think must be taken into account during a computation? In what follows, we focus on time and space.

Measuring resources

The amount of resources (time/space) spent by a Turing Machine $M$ may be expressed as two functions $\mathcal{T}_M, \mathcal{S}_M : \Sigma^* \rightarrow \mathbb{N}$, where $\mathcal{T}_M(w)$ (resp. $\mathcal{S}_M(w)$) is the number of steps performed (resp. tape cells used) by $M$ when running on input $w$.

This definition suffers from unnecessary overhead, which makes time and space analysis difficult. The following example illustrates why this is the case:

Algorithm ($ Alg(n)$ ):

$ \mathbf{while} \mbox{ } n \lt 100$

$ \quad n=n+1$

$ \mathbf{return} \mbox{ } 1$

We note that $Alg$ runs 100 steps for $n=0$, but only one step for $n \geq 100$. However, in practice, each input is often considered to be as likely to occur as any other.

In general, it is inconvenient to account for the number of transitions of a Turing Machine w.r.t. the value of the input. It is more convenient to bound the number of transitions by a function of the input length:

Definition (Running time of a TM):

The running time of a Turing Machine $M$ is given by $\mathcal{T}_M : \mathbb{N} \rightarrow \mathbb{N}$ iff, for all $\omega \in \Sigma^*$, the number of transitions performed by $M$ when running on $\omega$ is at most $\mathcal{T}_M(\mid \omega \mid)$.
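To make the definition concrete, here is a minimal C sketch of $Alg$ from above, instrumented with a step counter (the function name and the step-counting convention are our own, illustrative choices: each loop iteration and the final return count as one transition):

#include <stdio.h>

/* Alg from above, instrumented with a step counter */
unsigned long alg_steps(int n) {
    unsigned long steps = 0;
    while (n < 100) {      /* one step per loop iteration */
        n = n + 1;
        steps++;
    }
    steps++;               /* one step for 'return 1' */
    return steps;
}

int main(void) {
    /* 101 steps for n = 0, a single step for n >= 100; the worst case
       over all inputs is constant, so T_Alg(k) <= 101 for every length k */
    printf("n = 0:   %lu steps\n", alg_steps(0));
    printf("n = 100: %lu steps\n", alg_steps(100));
    return 0;
}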

Consumed space of a TM

The above example also motivates a more careful accounting for the consumed space. Informally: the space consumed by $M$ on input $\omega$ is the number of tape cells used by $M$ during its computation. As with running time, the consumed space of $M$ is given by $\mathcal{S}_M : \mathbb{N} \rightarrow \mathbb{N}$ iff, for all $\omega \in \Sigma^*$, the number of tape cells used by $M$ when running on $\omega$ is at most $\mathcal{S}_M(\mid \omega \mid)$.

Common mistakes when accounting for time and space

Consider the following algorithm:

Algorithm ($ P(n)$ ):

$ i=0, s = 0$

$ \mathbf{while} \mbox{ } i \lt n$

$ \quad s=s+i$

$ \quad i=i+1$

$ \mathbf{return} \mbox{ } 1$

A common mistake is to observe that the loop performs $n$ iterations and conclude that $P$ runs in linear time. The input $n$ is encoded on $\mid n \mid \approx log(n)$ bits, hence $n \approx 2^{\mid n \mid}$ iterations are exponential in the size of the input. A second mistake is to treat $s = s + i$ as a single transition: its cost grows with the number of bits of $s$ and $i$. The sketch below makes the first mistake visible.
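A minimal C rendition of $P$ (with names of our own choosing), instrumented to show that adding one bit to the input roughly doubles the number of iterations:

#include <stdio.h>

/* P from above, instrumented: returns the number of loop iterations */
unsigned long p_steps(unsigned long n) {
    unsigned long i = 0, s = 0, steps = 0;
    while (i < n) {
        s = s + i;
        i = i + 1;
        steps++;
    }
    return steps;
}

/* number of bits in the binary encoding of n (the actual input size) */
unsigned bits(unsigned long n) {
    unsigned b = 1;
    while (n > 1) { n /= 2; b++; }
    return b;
}

int main(void) {
    for (unsigned long n = 1; n <= 1024; n *= 2)
        printf("|n| = %2u bits -> %4lu iterations\n", bits(n), p_steps(n));
    return 0;
}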

The encoding does not matter

Proposition:

Let $f$ be a problem which is decidable by a Turing Machine $M$ with alphabet $\Sigma$ and with running time $T$. Then $f$ is decidable in time $4 \cdot log(K) \cdot T(n)$ by a Turing Machine with alphabet $\{0,1,\#\}$, where $K = \mid\Sigma\mid$.

Proof:

We build a Turing Machine $M^*$ with the desired property, by relying on $M$, the TM which decides $f$, as follows:

  • we encode each symbol of $ \Sigma$ using $ log(K)$ bits;
  • we simulate $ M$ as follows:
    • we read the symbol from the current cell in $log(K)$ steps; in order to do so, we need to create (at most) $2^0 + 2^1 + \ldots + 2^{log(K)} = 2K - 1$ states for each existing state in $M$. These states are connected as a complete binary tree: each level $i$, consisting of $2^i$ states, is responsible for remembering the first $i$ bits read from the tape;
    • for writing each symbol, we require $log(K)$ states, and the process takes exactly $log(K)$ steps (moving backwards over the read symbol);
    • for moving the head, we require $0$ or $log(K)$ states (and steps), depending on the direction ($0$ for hold, $log(K)$ for left/right);
    • we also require do-nothing transitions between the reading, writing and head-moving phases, as well as a final transition which takes us to the next state.

Hence, at most $4 \cdot log(K)$ steps are performed by $M^*$ for each transition of $M$, which gives the running time $4 \cdot log(K) \cdot T(n)$.
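As a quick sanity check (a worked instance of our own, not part of the original proof): take $K = \mid\Sigma\mid = 8$, so each symbol is encoded on $log(K) = 3$ bits. The reading tree then requires

$2^0 + 2^1 + 2^2 + 2^3 = 15 = 2K - 1$

states per state of $M$, and each simulated transition costs at most $3$ (read) $+\ 3$ (write) $+\ 3$ (move) $+\ 3$ (do-nothing and next-state transitions, assuming they amount to at most $log(K)$ steps) $= 12 = 4 \cdot log(8)$ steps.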

Towards resources consumed by programs

Most observations which we have made for the Turing Machine extend naturally to programming languages:

Discussion

Consider the following program:

read(i);
while (i>0){
   i++;
}

What is the consumed time?
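The notes leave the question open; the following is one possible reading. Over unbounded (mathematical) integers, the loop never terminates for any $i > 0$. With fixed-width machine integers, however, the increment eventually overflows and the program terminates, as the C sketch below illustrates (we use unsigned deliberately, since signed overflow is undefined behavior in C):

#include <stdio.h>

int main(void) {
    unsigned i = 1;            /* stands in for read(i) */
    unsigned long steps = 0;

    /* with 32-bit unsigned arithmetic, i wraps around to 0 after
       4294967295 increments (starting from i = 1), so the loop stops */
    while (i > 0) {
        i++;
        steps++;
    }
    printf("terminated after %lu steps\n", steps);
    return 0;
}

So the consumed time of the very same program text differs between the idealized model (divergence) and an actual machine (roughly $2^{32}$ steps on a platform with 32-bit unsigned).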

Case study

The complexity of Merge-sort:

int* mergesort(int* v, int n) {
   if (n <= 1) {                           /* base case: nothing to split */
      int* r = malloc(n * sizeof(int));
      if (n == 1) r[0] = v[0];
      return r;
   }
   int* v1 = mergesort(v, n/2);            /* sort the first half */
   int* v2 = mergesort(v + n/2, n - n/2);  /* sort the second half */
   return merge(v1, n/2, v2, n - n/2);     /* merge the sorted halves */
}

where:

int* merge(int* v1, int n1, int* v2, int n2) {
   int* r = malloc((n1+n2) * sizeof(int));
   int i = 0, j = 0, k = 0;

   /* repeatedly take the smaller of the two current elements */
   while (i < n1 && j < n2)
      if (v1[i] > v2[j])
         r[k++] = v2[j++];
      else
         r[k++] = v1[i++];

   /* copy whatever remains of either half */
   while (i < n1)
      r[k++] = v1[i++];

   while (j < n2)
      r[k++] = v2[j++];

   return r;
}
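Counting resources as above (our summary, with $c, c'$ standing for implementation-dependent constants): each call to merge performs $n_1 + n_2$ constant-cost iterations and allocates $n_1 + n_2$ cells. For $n$ a power of two, the running time of mergesort therefore obeys the recurrence

$\mathcal{T}(n) = 2 \cdot \mathcal{T}(n/2) + c \cdot n, \quad \mathcal{T}(1) = c'$

which unfolds into $log(n)$ levels of recursion, each contributing $c \cdot n$ steps; hence $\mathcal{T}(n) = \mathcal{O}(n \cdot log(n))$, while the space consumed at each recursion level is $\mathcal{O}(n)$ cells.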