HiveBrain v1.2.0

Concurrent for loop in C++

Submitted by: @import:stackexchange-codereview
Tags: loop, concurrent, for

Problem

(See the next iteration.)

I have this easy-to-use facility that maps input elements to output elements concurrently by means of a thread pool:

concurrent.h:

```
#ifndef FORP_H
#define FORP_H

#include <cstddef>
#include <initializer_list>
#include <mutex>
#include <tuple>
#include <utility>

namespace net {
namespace coderodde {
namespace concurrent {

////////////////////////////////////////////////////////////////////
// This is an ad hoc concurrent queue used by forp.               //
////////////////////////////////////////////////////////////////////
template<typename T>
class queue
{
private:

    struct queue_node
    {
        T m_element;
        size_t m_element_index;
        queue_node* m_next;

        queue_node(const T& element, const size_t index) :
            m_element{element},
            m_element_index{index},
            m_next{nullptr}
        {
        }
    };

    std::mutex m_mutex;
    queue_node* m_head;
    queue_node* m_tail;

public:

    queue(std::initializer_list<T> list)
    {
        m_head = nullptr;
        size_t index = 0;

        for (const auto& element : list)
        {
            queue_node* new_node = new queue_node(element, index++);

            if (m_head == nullptr)
            {
                m_head = new_node;
                m_tail = new_node;
            }
            else
            {
                m_tail->m_next = new_node;
                m_tail = new_node;
            }
        }
    }

    // ... (remainder of the header elided in this excerpt)
```
Solution

Just a few items which caught my eye:

- I wouldn't have bothered implementing my own queue. Just use a std::deque or a plain std::vector with an index pointing to the current head element. Saves a bunch of code which you don't have to test and maintain.
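For comparison, here is a minimal sketch of what a deque-backed alternative might look like. The class and member names are hypothetical, not from the original code; only the dequeue() contract (element, original index, success flag) is kept.

```cpp
#include <cstddef>
#include <deque>
#include <initializer_list>
#include <mutex>
#include <tuple>
#include <utility>

// Hypothetical sketch: the same dequeue() contract, backed by std::deque,
// so there are no hand-rolled nodes to allocate, free, or test.
template<typename T>
class deque_backed_queue
{
    std::mutex m_mutex;
    std::deque<std::pair<T, std::size_t>> m_items;

public:
    deque_backed_queue(std::initializer_list<T> list)
    {
        std::size_t index = 0;
        for (const auto& element : list)
            m_items.emplace_back(element, index++);
    }

    // Returns (element, original index, success flag).
    std::tuple<T, std::size_t, bool> dequeue()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_items.empty())
            return std::make_tuple(T(), std::size_t{0}, false);

        auto front = m_items.front();  // copy out before popping
        m_items.pop_front();
        return std::make_tuple(front.first, front.second, true);
    }
};
```

The deque owns all element storage, so there is no node memory to leak and no destructor to write.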

- You shouldn't use the mutex directly; use a std::lock_guard instead to make sure the mutex gets released automatically when the scope is left.

- You should reduce the scope of the mutex to a minimum to avoid unnecessary lock contention (in this case probably more of an academic point, but still a good habit to get into).

So the dequeue method could look like this:

```
std::tuple<T, size_t, bool> dequeue()
{
    queue_node* item = nullptr;

    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_head != nullptr)
        {
            item = m_head;
            m_head = m_head->m_next;
        }
    }

    if (item != nullptr)
    {
        auto result = std::make_tuple(item->m_element,
                                      item->m_element_index,
                                      true);
        delete item;  // free the detached node instead of leaking it
        return result;
    }

    return std::make_tuple(T(), size_t{0}, false);
}
```


- This:

```
output_vector.clear();
output_vector.reserve(input_list.size());
for (size_t i = 0; i < input_list.size(); ++i)
{
    output_vector.push_back(Out());
}
```


Can be replaced with

```
output_vector.clear();
output_vector.resize(input_list.size());
```


since resize will automatically insert elements for you if the current size is smaller than the requested size.

Update: Actually I just noticed that your queue implementation is leaking memory: nodes get new-ed but never deleted. Which comes back to my first point :)
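One way to plug that leak, sketched here with hypothetical names (this is not the original code), is to make the next-pointer owning via std::unique_ptr; detaching the front node then frees it automatically when it goes out of scope:

```cpp
#include <cstddef>
#include <memory>
#include <utility>

// Sketch: an owning next-pointer means destroying the head releases the chain.
template<typename T>
struct owning_node
{
    T m_element;
    std::size_t m_element_index;
    std::unique_ptr<owning_node> m_next;  // owns the rest of the list
};

// Detach the front node; the caller's unique_ptr deletes it on scope exit.
template<typename T>
std::unique_ptr<owning_node<T>> pop_front(std::unique_ptr<owning_node<T>>& head)
{
    auto node = std::move(head);         // take ownership of the old head
    if (node)
        head = std::move(node->m_next);  // promote the second node
    return node;
}
```

One caveat: the chain is then destroyed recursively through the unique_ptr members, which can overflow the stack for very long queues; an explicit loop in the destructor avoids that.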

Also, you're copying the In and Out elements around a few times when you probably could just move them, but I don't do enough day-to-day modern C++ to provide a correct answer on the spot right now. I'll leave that to someone else.


Context

StackExchange Code Review Q#133883, answer score: 3
