Let’s say these things to do are in an array “arr”, and the action to apply to each of them is a procedure called “process”. Suppose we’re working in a language where procedures can be passed as parameters to a function.

>> define procedure process(elem)

*some process implemented here for one element.*

<< end procedure process

>> define function doSomething(process, arr, n):

if n = 1: process(first and only element of array);

else {

….let m = floor(n/2);

….doSomething(process, subarray(arr, from-index 1, to-index m), m);

….doSomething(process, subarray(arr, from-index m+1, to-index n), n-m);

}

<< end function doSomething
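The recursive version above can be sketched in Python, which lets us pass functions as values directly. The names `do_something` and `process` follow the pseudocode; the empty-array guard is an addition (the pseudocode assumes n ≥ 1), and the recorded-elements `process` is just a hypothetical stand-in:

```python
def do_something(process, arr):
    """Apply `process` to every element by splitting the array in half."""
    n = len(arr)
    if n == 0:
        return  # guard not in the pseudocode: nothing to do on an empty array
    if n == 1:
        process(arr[0])  # first and only element
    else:
        m = n // 2  # floor(n/2)
        do_something(process, arr[:m])   # first half: indices 1..m
        do_something(process, arr[m:])   # second half: indices m+1..n

seen = []
do_something(seen.append, [10, 20, 30, 40, 50])
print(seen)  # → [10, 20, 30, 40, 50]
```

Note that the splits always recombine in order, so every element is processed exactly once, left to right.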

———————

This was my approach while processing 200+ photos from the banquet today. And this was the epiphany I reached: **you can always divide larger problems into smaller problems, and handle the smaller problems to solve the bigger one**.

Granted, in reality for photos, m is a uniformly distributed random variable ranging from 1 to n. But I digress.

Now we can also do this (assuming procedure process is already defined):

>> define function doSomething(process, arr, n):

for (i = 1; i <= n; i++) {

….process(arr[i]);

}

<< end function doSomething
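The same contract in iterative form, again sketched in Python with the hypothetical `do_something_iter` name to avoid clashing with the recursive version:

```python
def do_something_iter(process, arr):
    """Apply `process` to each element in order, one loop pass per element."""
    for elem in arr:
        process(elem)

handled = []
do_something_iter(handled.append, ["a", "b", "c"])
print(handled)  # → ['a', 'b', 'c']
```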

In the computer world, both functions do the same thing in the same asymptotic time: each calls process exactly once per element, so both are linear in n. The second function is more concisely written by far.

But we’re not computers; we’re humans. In the latter function, your problem *looks* like it’s size n. In the former function, your problem size decreases by a factor of 2 in each recursive call. *Psychologically*, this can make a world of difference in motivation and efficiency in completing your task.

Food for thought…
