Asynchronous version of AutoResetEvent
Problem
This is my second attempt to create an asynchronous version of AutoResetEvent.
At first I tried to make it completely lock-less, but it turned out to be impossible.
This implementation contains a lock; however, the lock isn't instance-wide:
```
/// <summary>
/// Asynchronous version of <see cref="AutoResetEvent"/>.
/// </summary>
public sealed class AutoResetEventAsync {

    private static readonly Task Completed = Task.FromResult(true);

    private readonly ConcurrentQueue<TaskCompletionSource<bool>> handlers =
        new ConcurrentQueue<TaskCompletionSource<bool>>();

    private int isSet;

    /// <summary>
    /// Initializes a new instance of the <see cref="AutoResetEventAsync"/> class with a Boolean value indicating whether to set the initial state to signaled.
    /// </summary>
    /// <param name="initialState">true to set the initial state to signaled; false to set the initial state to nonsignaled.</param>
    public AutoResetEventAsync(bool initialState) {
        this.isSet = initialState ? 1 : 0;
    }

    /// <summary>
    /// Sets the state of the event to signaled, allowing one waiting continuation to proceed.
    /// </summary>
    public void Set() {
        if (!this.TrySet())
            return;

        TaskCompletionSource<bool> handler;

        // Notify first alive handler
        while (this.handlers.TryDequeue(out handler))
            if (CheckIfAlive(handler)) // Flag check
                lock (handler) {
                    if (!CheckIfAlive(handler))
                        continue;
                    if (this.TryReset())
                        handler.SetResult(true);
                    else
                        this.handlers.Enqueue(handler);
                    break;
                }
    }

    /// <summary>
    /// Try to switch the state to signaled from not signaled
    /// </summary>
    /// <returns>
    /// true if succeeded, false if failed
    /// </returns>
    private bool TrySet() {
        return Interlocked.CompareExchange(ref this.isSet, 1, 0) == 0;
    }

    /// <summary>
    /// Waits for a signal asynchronously
    /// </summary>
    public Task WaitAsync() {
        return this.WaitAsync(CancellationToken.None);
    }

    /// <summary>
    /// Waits
```
Solution
> At first I tried to make it completely lock-less, but it turned out to be impossible. This implementation contains a lock; however, the lock isn't instance-wide.
It may shock you to hear that lock-free code may be slower than code using locking.
Lock-free code is usually longer and is definitely much more complex. You should not attempt to write lock-free code until you fully understand the .NET memory model. Joe Duffy's book would be a good starting point after reading those articles. I would say you are ready to write lock-free code when you can describe why double-checked locking is broken in Java and correctly answer this question: is it also broken in C#?
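For reference, a sketch (mine, not the answerer's) of the double-checked locking pattern the paragraph alludes to, written in C#. In pre-JSR-133 Java the unlocked first read could observe a partially constructed object, which is why the pattern was broken there; in C# it is generally considered safe provided the field is declared volatile:

```csharp
using System;

public sealed class Singleton
{
    // 'volatile' is the crux of the C# answer: without it, the first,
    // unlocked read below could in principle observe a reference to a
    // not-yet-fully-constructed object.
    private static volatile Singleton instance;
    private static readonly object gate = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)          // first check, no lock taken
            {
                lock (gate)
                {
                    if (instance == null)  // second check, under the lock
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }
}
```

In practice, `Lazy<T>` is the idiomatic way to get the same effect without reasoning about the memory model at all.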
But back to the point: lock-free code is easily slower than regular locking in this instance, because lock-free code is only faster when there's a lot of contention. When you're writing async-compatible primitives, any locks you take are only going to last for the synchronous portion of that method call - an extremely short time. So, unless you have literally hundreds of different threads all calling Set or WaitAsync on the same instance at the same time, you won't have a problem. In other words, async primitives kind of "lift" you to a higher level of scheduling - you actually build your own queue of waiters, and any lock you have isn't going to affect the real contention, which will be represented by that queue.

So, on with the actual code review...
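To make the "locks only last for the synchronous portion" point concrete, here is a minimal sketch (my example, not code from the answer) of the plain-lock approach: the lock guards only the queue and the flag for a handful of instructions, and the TCS is completed after the lock is released:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class AsyncAutoResetEvent
{
    private static readonly Task Completed = Task.FromResult(true);
    private readonly Queue<TaskCompletionSource<bool>> waiters =
        new Queue<TaskCompletionSource<bool>>();
    private bool signaled;

    public Task WaitAsync()
    {
        lock (this.waiters)
        {
            if (this.signaled)
            {
                this.signaled = false;   // consume the pending signal
                return Completed;
            }
            var tcs = new TaskCompletionSource<bool>();
            this.waiters.Enqueue(tcs);
            return tcs.Task;
        }
    }

    public void Set()
    {
        TaskCompletionSource<bool> toRelease = null;
        lock (this.waiters)
        {
            if (this.waiters.Count > 0)
                toRelease = this.waiters.Dequeue();
            else
                this.signaled = true;
        }
        // Completed outside the lock, so any continuations that run
        // inline never execute while the lock is held.
        if (toRelease != null)
            toRelease.SetResult(true);
    }
}
```

On .NET 4.6 and later you can also construct the TCS with TaskCreationOptions.RunContinuationsAsynchronously, which removes the inline-continuation hazard entirely.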
As @svick correctly pointed out, you have a (benign) race condition in your code that may cause starvation, when your Set chooses to re-enqueue a TCS. You are also holding onto resources longer than necessary by keeping "dead" TCS instances in your queue. Remember that each TCS has a Task along with all its continuations, and any local variables captured by their lambdas, etc.

But that's not a deal-breaker; it's just somewhat unfair (starvation) and inefficient (holding references). There's a more sinister and subtle problem with this implementation.
You are calling back to end-user code while holding a lock. This is violating one of the central laws of multithreaded programming. Whenever you do this, there is a possibility of deadlock.
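A small demonstration of the hidden callback (my example, not from the answer): a continuation scheduled with ExecuteSynchronously will normally run inline on the thread that completes the TaskCompletionSource, before SetResult even returns - so if SetResult is called while holding a lock, that end-user code runs under the lock too:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var tcs = new TaskCompletionSource<bool>();
        tcs.Task.ContinueWith(
            _ => Console.WriteLine("continuation on thread " +
                                   Thread.CurrentThread.ManagedThreadId),
            TaskContinuationOptions.ExecuteSynchronously);

        Console.WriteLine("caller on thread " +
                          Thread.CurrentThread.ManagedThreadId);
        tcs.SetResult(true);   // the continuation typically runs inline, here
        Console.WriteLine("SetResult returned");
    }
}
```

The same inlining happens for `await` continuations, which the async machinery schedules with this flag, so every caller of WaitAsync is a potential inline callback.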
It's very subtle because the callback is not obvious, but it's there in both Set and Reset: SetResult and SetCanceled both (synchronously) invoke any task continuations that were scheduled with the ExecuteSynchronously flag.

Context
StackExchange Code Review Q#23783, answer score: 7