Comparing 32- and 64-bit
Problem
A few days ago I saw this question on Stack Overflow.
I have tried to make a little demo to show the speed advantage of using 64-bit memory operations compared to 32-bit ones. So I wrote this code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
#define MEGABYTES 1024 * 1024
#define MEMORY_SIZE 100
#define LOOPS_START 100
#define LOOPS_STOP 10001
int main (int argc, const char * argv[])
{
clock_t start, stop;
int64_t* big_pile = malloc(MEGABYTES * MEMORY_SIZE * sizeof(char));
for (int loops = LOOPS_START; loops < LOOPS_STOP; loops += 100)
{
start = clock();
for (int i = 0; i < loops; i++)
{
for (int pos = 0; pos < MEGABYTES * MEMORY_SIZE * sizeof(char) / sizeof(int64_t); pos++) {
big_pile[pos] = big_pile[pos] ^ big_pile[pos];
}
}
stop = clock();
double elapsed = (double)(stop - start) / CLOCKS_PER_SEC * 1000;
double average = elapsed / loops;
printf("Loops:\t%i\tElapsed time:\t%f\tAverage:\t%f\n", loops, elapsed, average);
}
free(big_pile);
}
I compile it under Xcode on a MacBook Air running OS X Lion. This allows me to change the build target to 32- or 64-bit. When I run the program (and verify that the program is actually running in the correct bit-mode), I see no real difference in time.
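As an aside, one simple way to verify the bit-mode at run time is to print the pointer size (a minimal check added here for illustration, not part of the original program):
/* Illustrative check: sizeof(void *) is 4 in a 32-bit build and 8 in a 64-bit build. */
#include <stdio.h>
int main(void)
{
    printf("pointer size: %zu bytes -> %s-bit build\n",
           sizeof(void *), sizeof(void *) == 8 ? "64" : "32");
    return 0;
}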
Is this the right way to measure performance difference between 32- and 64-bit?
Is XCode/the compiler playing tricks on me?
Is there anything else wrong?
Note: Sorry for the C. C isn't my native tongue ;-)
Solution findings
Writing 100 MB of memory executing in 64-bit mode using 64-bit integer pointers takes ~47 seconds. Executing in 32-bit mode using 64-bit integer pointers takes ~70 seconds. Executing in 32-bit mode using 32-bit integer pointers takes ~90 seconds.
Solution
You are using int64_t, so most of your code gets compiled into the same machine code regardless of the bitness of the target. Also, A ^ A = 0, so the compiler might be optimizing your entire for loop into a single call to memset with zero. I would recommend that you move different values around instead of xoring the same values together, and I would recommend that you use int32_t for the 32-bit version and int64_t for the 64-bit version.
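A minimal sketch of what that change could look like (the word_t typedef, the initialization, and the copy pattern are illustrative choices added here, not code from the original question or answer): the element width follows the build target, the buffer holds distinct values, and each store depends on a load of a different element, so the loop is not a trivial A ^ A that can collapse into memset.
/* Illustrative sketch only: pick the word width from the build target and
 * move distinct values around so the loop cannot be folded into memset(0). */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
#define MEGABYTES (1024 * 1024)
#define MEMORY_SIZE 100
/* Assumed scheme: derive the element type from the pointer width of the build. */
#if UINTPTR_MAX > 0xFFFFFFFFu
typedef int64_t word_t;
#else
typedef int32_t word_t;
#endif
int main(void)
{
    size_t bytes = (size_t)MEGABYTES * MEMORY_SIZE;
    size_t count = bytes / sizeof(word_t);
    word_t *big_pile = malloc(bytes);
    if (big_pile == NULL)
        return 1;
    /* Fill with distinct values so there is real data to move around. */
    for (size_t pos = 0; pos < count; pos++)
        big_pile[pos] = (word_t)pos;
    clock_t start = clock();
    for (int pass = 0; pass < 100; pass++)
        for (size_t pos = 1; pos < count; pos++)
            big_pile[pos - 1] = big_pile[pos] + 1; /* store depends on a different element */
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* Print a value derived from the data so the compiler must keep the work. */
    printf("word size: %zu bytes, elapsed: %.3f s, sample: %lld\n",
           sizeof(word_t), elapsed, (long long)big_pile[0]);
    free(big_pile);
    return 0;
}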
Context
Stack Exchange Code Review Q#7685, answer score: 2