*DISCLAIMER: This is only a theoretical idea. I have not confirmed that it would actually increase the security of an iterative hash, so please take that into account when reading this.
I was explaining iterative hashing the other day and came up with an interesting theory: using a weaker algorithm may actually result in a stronger hash. The reason is the collisions that can occur in algorithms like SHA-0, SHA-1, and MD5 (a collision is when two different inputs produce the exact same hash). By using a collision-prone algorithm in an iterative hash, we could potentially throw an attacker way off.
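For anyone unfamiliar with the idea, an iterative hash just feeds each digest back into the hash function some fixed number of times. A minimal sketch in Python (the function name and round count are my own; a real scheme would also use a salt and a purpose-built KDF):

```python
import hashlib

def iterative_hash(secret: str, rounds: int) -> str:
    """Repeatedly hash the previous hex digest.
    MD5 is used here only because the post discusses weak algorithms."""
    digest = secret
    for _ in range(rounds):
        digest = hashlib.md5(digest.encode()).hexdigest()
    return digest

# The attacker who wants the original secret has to reverse every
# round of this chain, one digest at a time.
final = iterative_hash("hunter2", 1000)
```

An attacker working backward from `final` must find a preimage for each round in turn, which is where a collision can derail them, as described below.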
Iteration 701 is where things break down. The hash we had from iteration 702 (ase6ae4rha) has a collision on it: both dfsujweru5 and 6483247435u will produce that hash. In this case the attacker cracked ase6ae4rha and recovered 6483247435u, not dfsujweru5. The attacker now tries to reverse 6483247435u and every hash derived from it, which puts them on a totally wrong path, and they will never crack this hash.
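The fork described above can be illustrated with a toy model: a fake lookup-table "hash function" with one deliberate collision built in. The names and values here are purely illustrative, not the strings from the example:

```python
# Toy "hash function" with a deliberate collision, to show how an
# attacker reversing an iterative chain can pick the wrong preimage.
toy_hash = {
    "secret":   "h1",
    "h1":       "h2",      # defender's chain: secret -> h1 -> h2 -> h3
    "h2":       "h3",
    "collider": "h2",      # second input that also hashes to "h2"
}

def preimages(value):
    """All inputs the attacker might try next when reversing one step."""
    return sorted(k for k, v in toy_hash.items() if v == value)

# Reversing from the final hash "h3", the first step back is unambiguous:
print(preimages("h3"))        # ['h2']
# The next step forks. If the attacker picks "collider" instead of "h1",
# they are stranded: nothing in the system ever hashes to "collider".
print(preimages("h2"))        # ['collider', 'h1']
print(preimages("collider"))  # []
```

This is only a model of the argument, not evidence for it; with a real hash function the attacker cannot enumerate preimages this way, and whether the wrong branch truly stops them is exactly the open question the disclaimer raises.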
Now don’t run out and start using a lesser algorithm based on this information; collisions do not happen that often. A SHA-1 collision is still only considered theoretically possible: the best known attack takes about 2^69 operations to find some colliding pair (about 2^39 for SHA-0), and finding a collision that matches one specific existing hash (a second preimage) is harder still.
As I do not have the processing power required to test this, I cannot calculate the chances of it actually happening, nor can I vouch for whether it is a feasible defence strategy.