Add optimizations where they really matter. A well-known fact among programmers is that the true sign of a language's superiority is its performance on various meaningless and artificial benchmarks. One such impractical benchmark is the Ackermann function, which really exercises an implementation's speed of recursion. The function is easy to write, but it is difficult for the computer to calculate and for the compiler to optimize.
If you love Perl, cheat. It's easy. A fairly fast but maintainable Perl 5 implementation of this function is:

    use strict;
    use warnings;
    no warnings 'recursion';

    sub ackermann
    {
        my ($m, $n) = @_;
        return $n + 1                                 if $m == 0;
        return ackermann( $m - 1, 1 )                 if $n == 0;
        return ackermann( $m - 1, ackermann( $m, $n - 1 ) );
    }

    print ackermann( 3, 10 ), "\n";

Analyzing the function reveals that it takes a long, long time to calculate the value for any interesting positive integers. That's why the code disables the warning for deep recursion. So, cheat. Add two lines of code to the program before calling ackermann( ) with the seed values to speed it up substantially:

    use Memoize;
    memoize( 'ackermann' );
Calculating for ( 3, 10 ) with the memoized version took just under 1.4 seconds on the author's computer. The author interrupted the unmemoized version after a minute, then felt bad and restarted it; it ran to completion in just over five minutes. These are not scientific results, but the difference in timing is dramatic. Is this really cheating? A hypothetically complex Perl compiler could notice that ackermann( ) has no side effects and mathematically must return the same output for any two given inputs, so it could perform this optimization itself. You're just helping it along with a core module. See the Memoize documentation for information on how this works and for legitimate uses of memoization. See the Wikipedia entry on the Ackermann function (http://en.wikipedia.org/wiki/Ackermann_function) for more about the function itself.
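The idea behind Memoize can be sketched in a few lines: wrap a pure function so that repeated calls with the same arguments hit a cache instead of recomputing. Here is a minimal hand-rolled equivalent, written in Python purely for illustration (the names and structure here are illustrative, not the internals of the Perl module):

```python
import sys

# Roughly the equivalent of "no warnings 'recursion'": give the
# interpreter enough stack to finish the deep recursive descent.
sys.setrecursionlimit(20000)

def memoize(func):
    """Wrap a pure function with a cache keyed on its arguments."""
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)   # compute once per distinct input
        return cache[args]              # every later call is a dict lookup
    return wrapper

@memoize
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 4))  # prints 125
```

Because the wrapper intercepts every recursive call, the exponential blowup of repeated subproblems collapses into one cache entry per distinct (m, n) pair, which is why the speedup is so dramatic.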