I'm on a Mac Pro (Mac OS X 10.4.10), and I'm trying to complete a programming assignment for my algorithms class. In it, we have to benchmark our sorting algorithm (counting sort, to be specific).
I have the sorting working perfectly. However, when I try to use the clock() function to benchmark it, I get some strange results. For test cases 1 and 3 (1 is an already-sorted vector of 10,000 ints, from 0 to 9999, and 3 is a vector of 10,000 random ints, ranging from 0 to 9999), I get 0.0000... seconds, even when I print with 20 decimal places of precision. For test case 2 (the reverse of test case 1: a 10,000-int vector going from 9999 down to 0), I always get EXACTLY 0.01000000000000000021 seconds, no matter what else I'm doing or how many times I run it.
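In case the setup matters, the three input vectors are built more or less like this (the variable names and the helper function are placeholders, not my exact code):

Code:
#include <cstdlib>   // rand()
#include <vector>

const int N = 10000;

void buildTestCases(std::vector<int>& input1,
                    std::vector<int>& input2,
                    std::vector<int>& input3)
{
    input1.resize(N);
    input2.resize(N);
    input3.resize(N);
    for (int i = 0; i < N; ++i) {
        input1[i] = i;          // case 1: already sorted, 0..9999
        input2[i] = N - 1 - i;  // case 2: reverse sorted, 9999..0
        input3[i] = rand() % N; // case 3: random ints in 0..9999
    }
}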
Originally, I thought "maybe my computer is just that fast and consistent". However, even if I deliberately add a large amount of extra time (by pausing for user input between two clock() calls), I get the same results. Also, if I just print the raw return value of a single clock() call, I get 0.
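The pause test I mean looks roughly like this (a simplified sketch of what I tried, not my exact code):

Code:
#include <cstdio>
#include <ctime>

int main()
{
    clock_t before = clock();

    printf("Press return to continue...");
    getchar();   // sit here as long as you like

    clock_t after = clock();

    // Even after pausing for several seconds, this prints 0.00 for me
    printf("Elapsed: %.2f seconds\n",
           (double)(after - before) / CLOCKS_PER_SEC);
    return 0;
}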
This leads me to believe that either I am doing something wrong, or there's something about Mac programming that I don't know. Does anyone have any experience with this?
If it helps, here is the benchmarking portion of my code:
Code:
// Prepare to benchmark
// (clock() and CLOCKS_PER_SEC come from <time.h>; printf from <stdio.h>)
clock_t start1, end1;
clock_t start2, end2;
clock_t start3, end3;
double time1, time2, time3;

start1 = clock();
countingSort(input1, output1, ARR_UPPER_BOUND); // Sort input1 (already sorted)
end1 = clock();
time1 = (double)(end1 - start1) / CLOCKS_PER_SEC;

start2 = clock();
countingSort(input2, output2, ARR_UPPER_BOUND); // Sort input2 (reverse sorted)
end2 = clock();
time2 = (double)(end2 - start2) / CLOCKS_PER_SEC;

start3 = clock();
countingSort(input3, output3, ARR_UPPER_BOUND); // Sort input3 (random)
end3 = clock();
time3 = (double)(end3 - start3) / CLOCKS_PER_SEC;

// Show benchmarks
printf("Time to run test case 1: %.20f \n", time1);
printf("Time to run test case 2: %.20f \n", time2);
printf("Time to run test case 3: %.20f \n", time3);
Results:
Code:
Time to run test case 1: 0.00000000000000000000
Time to run test case 2: 0.01000000000000000021
Time to run test case 3: 0.00000000000000000000
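And in case it's relevant, the sort being timed matches the call signature above and looks roughly like this (a minimal counting sort sketch, not my exact code, but the same kind of work):

Code:
#include <vector>

void countingSort(const std::vector<int>& input,
                  std::vector<int>& output,
                  int upperBound)
{
    std::vector<int> count(upperBound + 1, 0);

    // Tally how many times each value appears
    for (int i = 0; i < (int)input.size(); ++i)
        count[input[i]]++;

    // Write the values back out in sorted order
    output.resize(input.size());
    int pos = 0;
    for (int v = 0; v <= upperBound; ++v)
        for (int c = 0; c < count[v]; ++c)
            output[pos++] = v;
}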