How to improve my list's speed
Problem
I'm wondering how I could improve my list's element-adding speed or just the whole list implementation.
The adding function first checks if the object is unique. If not, it will just check if the new data is better (the lower the value, the better) than the old one, then swap the values or just skip adding an element.
I have no idea how to make it work faster, but I know that something here is probably done terribly wrong. I need to use that list to add about 3 million elements in less than 4 seconds.
Note: Using
std::string in my project is unfortunately forbidden.

#include <cstring>
struct node
{
    long shortestRoute;
    char* name;
    node *next;
    node()
    {
        next = 0;
    }
};

class List {
public:
    node *first;
    List() { first = 0; }
    void add(long shortestRoute, char* name)
    {
        node *newNode = new node;
        newNode->name = name;
        newNode->shortestRoute = shortestRoute;
        if(first == 0)
            first = newNode;
        else
        {
            node *temp = first;
            while(temp)
            {
                if(!strcmp(temp->name, name) && temp->shortestRoute > shortestRoute)
                {
                    temp->shortestRoute = shortestRoute;
                    return;
                }
                else if(!strcmp(temp->name, name))
                    return;
                else if(temp->next)
                    temp = temp->next;
                else
                    break;
            }
            temp->next = newNode;
            newNode->next = 0;
        }
    }
    int size()
    {
        int size = 0;
        for(node *temp = first; temp != 0; temp = temp->next)
            ++size;
        return size;
    }
};

struct City
{
    City() { fastConnection = true; }
    int shortestRoute;
    char* name;
    Vector citiesSoFar;
    bool fastConnection;
};

Solution
If I understand correctly, you wish to maintain a list of places, recording only the shortest route to each place. This means that on every insertion you need to find any duplicate entry and choose which value to keep.
For this task, an unordered list is pretty much the worst choice you could make. Your algorithm is O(n²) (that is, O(n) for adding each of the n elements), and while that doesn't matter for 10 destinations, it's pretty critical for 3 million!

Given that ordering is not critical here, a hash table is really the correct implementation to use. This will get you O(1) insert and lookup, and the added programming complexity and memory use is appropriate for that number of elements. If that's somehow not possible, then second best would be some kind of self-balancing tree.
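As a rough sketch of what that looks like, the following uses std::unordered_map with std::string keys for brevity. Since the asker's project forbids std::string, a real version would need char* keys with a custom hash and equality functor (or a hand-rolled table); this only illustrates the shape of the approach:

```cpp
#include <string>
#include <unordered_map>

// Sketch only: keeps the shortest route seen so far for each city name.
// Insert and lookup are O(1) on average instead of O(n) per call.
class RouteTable {
public:
    void add(long shortestRoute, const char* name)
    {
        auto it = routes.find(name);
        if (it == routes.end())
            routes.emplace(name, shortestRoute);  // first time we see this city
        else if (it->second > shortestRoute)
            it->second = shortestRoute;           // found a better (lower) route
    }

    long route(const char* name) const { return routes.at(name); }

    std::size_t size() const { return routes.size(); }  // O(1)

private:
    std::unordered_map<std::string, long> routes;
};
```

Over 3 million add calls this stays close to linear time overall, versus the quadratic behaviour of scanning a linked list on every insert.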
Finally, it makes no sense to count the length of a large list by iterating over its elements. Just increment a counter every time you add an element; that reduces size() from O(n) to O(1). (In theory, if you add and destroy elements very frequently but query the size only once in a blue moon, this advice doesn't pay off, but I doubt that's the case here.)
As far as your existing implementation goes, it's a bad plan to create the new node up front and then not use it. In a garbage-collected language that would merely be wasted effort, but in C++ it's a huge memory leak.
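Applied to the posted class, a minimal sketch of those last two fixes might look like this: allocate the node only once the duplicate scan has decided one is needed, and keep a running count so size() no longer walks the list. (The linear scan itself remains; this does not address the O(n²) insertion cost.)

```cpp
#include <cstring>

struct node
{
    long shortestRoute;
    char* name;
    node *next;
    node() { next = 0; }
};

class List {
public:
    node *first;
    List() : first(0), count(0) {}

    void add(long shortestRoute, char* name)
    {
        node *last = 0;
        for (node *temp = first; temp; temp = temp->next)
        {
            if (!strcmp(temp->name, name))
            {
                if (temp->shortestRoute > shortestRoute)
                    temp->shortestRoute = shortestRoute;  // keep the better route
                return;  // duplicate handled; nothing was allocated, so no leak
            }
            last = temp;
        }
        node *newNode = new node;  // allocate only when we will actually insert
        newNode->name = name;
        newNode->shortestRoute = shortestRoute;
        if (last)
            last->next = newNode;
        else
            first = newNode;
        ++count;                   // maintain the size as we go
    }

    int size() const { return count; }  // O(1) instead of walking the list

private:
    int count;
};
```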
Context
StackExchange Code Review Q#30533, answer score: 3