lock-free job queue without size restriction (multiple read/write)
Problem
I've since come up with an improved version, which isn't technically lock-free, but might be as close as you can get: (nearly) lock-free job queue of dynamic size (multiple read/write).
Note: the code below is not thread safe.
I was looking for a *simple* lock-free job queue which can be used in a generic way, cross-platform: no external dependencies, only a few calls in the interface, no exotic compiler tricks which may break from compiler to compiler, and preferably header-only.
Either I suck at Googling, or this isn't available (they are not mutually exclusive, but you get the point).
This is what I eventually came up with. It's a linked list, which allocates a fifo_node_type on the heap for each item pushed. This item gets destroyed in the pop function.
fifo.h
```
/**
 * This is a lock free fifo, which can be used for multi-producer, multi-consumer
 * type job queue
 */
template < typename Value >
struct fifo_node_type
{
    fifo_node_type( const Value &original ) :
        value( original ),
        next( nullptr ) { }

    Value value;
    fifo_node_type *next;
};

template < typename Value, typename Allocator = std::allocator< fifo_node_type< Value > > >
class fifo
{
public:
    typedef Value value_type;
    typedef Allocator allocator_type;
    typedef std::vector< value_type > vector_type;
    typedef fifo_node_type< value_type > *node_ptr;

    fifo() :
        start_(),
        end_(),
        allocator_() {}

    ~fifo()
    {
        clear();
    }

    /**
     * pushes an item into the job queue, may throw if allocation fails
     * leaving the queue unchanged
     */
    template < typename T >
    void push( T &&val )
    {
        node_ptr newnode = create_node( std::forward< T >( val ) );
        node_ptr tmp = nullptr;
        start_.compare_exchange_strong( tmp, newnode );
        node_ptr prev_end = end_.exchange( newnode );
        if ( prev_end )
        {
            prev_end->next = newnode;
        }
    }

    /**
     * retrieves an item from the job queue.
     * if no item was available, func is
```
Solution
Herb Sutter has written a series of articles about lock-free queues:

- "Lock-Free Code: A False Sense of Security": explaining why STL containers are not suitable for lock-free code.
- "Writing a Generalized Concurrent Queue": a supposedly lock-free multi-producer, multi-consumer queue that does contain two mutexes.
A reason that the approach that you and Herb are using cannot be truly lock-free in the multi-producer or multi-consumer case is that both the start_ or end_ variables and the start or end of the list need to be changed in a single atomic operation, which is not possible on existing architectures. I wonder if an implementation without start_ and end_ variables, where each thread traverses the queue by itself, could work?

Context
StackExchange Code Review Q#46722, answer score: 2