
AbstractLock::Node Class Reference

List of all members.

Public Member Functions

int v_waitStatus ()
void v_waitStatus (int val)
Node v_prev ()
void v_prev (Node val)
Node v_next ()
void v_next (Node val)
Thread v_thread ()
void v_thread (Thread val)
bool isShared ()
Node predecessor ()
 this ()
 this (Thread thread, Node mode)
 this (Thread thread, int waitStatus)

Static Public Member Functions

static this ()

Public Attributes

const int CANCELLED = 1
const int SIGNAL = -1
const int CONDITION = -2
int waitStatus
Node prev
Node next
Thread thread
Node nextWaiter

Static Public Attributes

static Node SHARED
static Node EXCLUSIVE

Detailed Description

Wait queue node class.

The wait queue is a variant of a "CLH" (Craig, Landin, and Hagersten) lock queue. CLH locks are normally used for spinlocks. We instead use them for blocking synchronizers, but use the same basic tactic of holding some of the control information about a thread in the predecessor of its node. A "status" field in each node keeps track of whether a thread should block. A node is signalled when its predecessor releases. Each node of the queue otherwise serves as a specific-notification-style monitor holding a single waiting thread. The status field does NOT control whether threads are granted locks, though. A thread may try to acquire if it is first in the queue, but being first does not guarantee success; it only gives the right to contend. So the currently released contender thread may need to re-wait.

To enqueue into a CLH lock, you atomically splice the new node in as the new tail. To dequeue, you just set the head field.

      +------+  prev +-----+       +-----+
 head |      | <---- |     | <---- |     |  tail
      +------+       +-----+       +-----+
 

Insertion into a CLH queue requires only a single atomic operation on "tail", so there is a simple atomic point of demarcation from unqueued to queued. Similarly, dequeuing involves only updating the "head". However, it takes a bit more work for nodes to determine who their successors are, in part to deal with possible cancellation due to timeouts.
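
To make that single point of demarcation concrete, here is a minimal D sketch of the enqueue/dequeue steps. It is illustrative only, not the LockImpl.d source: the Node class below carries just the fields documented on this page, compareAndSet is a hypothetical, non-atomic stand-in for whatever CAS primitive the real code uses, and the dummy head node is assumed to exist already (its lazy creation is described further below).

import core.thread : Thread;

// Hypothetical stand-in for the real compare-and-swap primitive; an actual
// implementation must perform this test-and-store atomically.
bool compareAndSet(T)(T* slot, T expected, T updated)
{
    if (*slot is expected) { *slot = updated; return true; }
    return false;
}

// Illustrative Node carrying the fields documented on this page.
class Node
{
    int    waitStatus;   // SIGNAL, CANCELLED, CONDITION, or 0 (see below)
    Node   prev;
    Node   next;
    Node   nextWaiter;   // condition-queue link or the SHARED marker
    Thread thread;
}

Node head;   // dequeue point: simply reassigned by the thread that acquired
Node tail;   // enqueue point: advanced only via CAS

// Enqueue: one successful CAS on tail is the single atomic step from
// "unqueued" to "queued".
void enqueue(Node node)
{
    for (;;)
    {
        Node t = tail;
        node.prev = t;                      // link backwards before publishing
        if (compareAndSet(&tail, t, node))
        {
            t.next = node;                  // the next link is only an optimization
            return;
        }
    }
}

// Dequeue: the acquiring thread just repoints head at its own node.
void dequeue(Node node)
{
    head = node;
    node.prev = null;                       // drop the back link for the GC
}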

The "prev" links (not used in original CLH locks), are mainly needed to handle cancellation. If a node is cancelled, its successor is (normally) relinked to a non-cancelled predecessor. For explanation of similar mechanics in the case of spin locks, see the papers by Scott and Scherer at http://www.cs.rochester.edu/u/scott/synchronization/

We also use "next" links to implement blocking mechanics. The thread id for each node is kept in its own node, so a predecessor signals the next node to wake up by traversing next link to determine which thread it is. Determination of successor must avoid races with newly queued nodes to set the "next" fields of their predecessors. This is solved when necessary by checking backwards from the atomically updated "tail" when a node's successor appears to be null. (Or, said differently, the next-links are an optimization so that we don't usually need a backward scan.)

Cancellation introduces some conservatism to the basic algorithms. Since we must poll for cancellation of other nodes, we can miss noticing whether a cancelled node is ahead or behind us. This is dealt with by always unparking successors upon cancellation, allowing them to stabilize on a new predecessor.

CLH queues need a dummy header node to get started. But we don't create it on construction, because it would be wasted effort if there is never contention. Instead, the node is constructed and the head and tail pointers are set upon first contention.
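
A sketch of that lazy initialization, again reusing the illustrative Node, head, tail, and compareAndSet names from the first sketch (an assumption-laden outline rather than the real enq routine):

// Full enqueue loop with lazy initialization: on first contention a dummy
// node is installed as head (and tail); only then does the normal tail
// splice proceed.
void enq(Node node)
{
    for (;;)
    {
        Node t = tail;
        if (t is null)                              // queue not initialized yet
        {
            Node dummy = new Node;
            if (compareAndSet!Node(&head, null, dummy))
                tail = dummy;                       // real code publishes this atomically too
        }
        else
        {
            node.prev = t;
            if (compareAndSet(&tail, t, node))
            {
                t.next = node;
                return;                             // spliced in as the new tail
            }
        }
    }
}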

Threads waiting on Conditions use the same nodes, but use an additional link. Conditions only need to link nodes in simple (non-concurrent) linked queues because they are only accessed when the lock is exclusively held. Upon await, a node is inserted into a condition queue. Upon signal, the node is transferred to the main queue. A special value of the status field is used to mark which queue a node is on.
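
An outline of that hand-off in the same illustrative style. The names addConditionWaiter, transferOnSignal, firstWaiter, and lastWaiter are hypothetical stand-ins; the real transfer logic lives in AbstractLock::transferForNotify() and the ConditionObject methods listed later on this page:

enum int CONDITION = -2;   // mirrors the constant documented below

// Append a waiter to a condition queue: a plain singly linked list chained
// through nextWaiter.  No atomics are needed because the lock is held
// exclusively whenever a condition queue is touched.
void addConditionWaiter(ref Node firstWaiter, ref Node lastWaiter, Node node)
{
    node.waitStatus = CONDITION;        // marks "this node is on a condition queue"
    if (lastWaiter is null)
        firstWaiter = node;
    else
        lastWaiter.nextWaiter = node;
    lastWaiter = node;
}

// On signal: clear the CONDITION mark and move the node to the sync queue.
void transferOnSignal(Node node)
{
    node.waitStatus = 0;   // the real code CASes CONDITION -> 0 and gives up if that fails
    // ...then splice node onto the main CLH queue (see the enqueue sketch above)
    // and unpark its thread if needed so it can re-contend for the lock.
}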

Thanks go to Dave Dice, Mark Moir, Victor Luchangco, Bill Scherer and Michael Scott, along with members of the JSR-166 expert group, for helpful ideas, discussions, and critiques on the design of this class.

Definition at line 315 of file LockImpl.d.


Member Function Documentation

static this () [inline, static]

Definition at line 327 of file LockImpl.d.

References EXCLUSIVE, Node, and SHARED.

int v_waitStatus () [inline]

Definition at line 361 of file LockImpl.d.

References waitStatus.

Referenced by AbstractLock::fullyRelease(), AbstractLock::setHeadAndPropagate(), AbstractLock::shouldParkAfterFailedAcquire(), and AbstractLock::transferForNotify().

void v_waitStatus (int val) [inline]

Definition at line 362 of file LockImpl.d.

References waitStatus.

Node v_prev () [inline]

Definition at line 376 of file LockImpl.d.

References prev.

Referenced by AbstractLock::addWaiter(), AbstractLock::enq(), AbstractLock::findNodeFromTail(), and AbstractLock::shouldParkAfterFailedAcquire().

void v_prev (Node val) [inline]

Definition at line 377 of file LockImpl.d.

References prev.

Node v_next () [inline]

Definition at line 391 of file LockImpl.d.

References next.

Referenced by AbstractLock::acquireQueued(), AbstractLock::addWaiter(), AbstractLock::doAcquireNanos(), AbstractLock::doAcquireShared(), AbstractLock::doAcquireSharedNanos(), AbstractLock::enq(), and AbstractLock::setHeadAndPropagate().

void v_next (Node val) [inline]

Definition at line 392 of file LockImpl.d.

References next.

Thread v_thread () [inline]

Definition at line 399 of file LockImpl.d.

References thread.

Referenced by AbstractLock::transferForNotify().

void v_thread (Thread val) [inline]

Definition at line 400 of file LockImpl.d.

References thread.

bool isShared () [inline]

Returns true if this node is waiting in shared mode.

Definition at line 416 of file LockImpl.d.

References nextWaiter, and SHARED.

Referenced by AbstractLock::setHeadAndPropagate().

Node predecessor () [inline]

Returns previous node. Use when predecessor cannot be null.

Returns:
the predecessor of this node

Definition at line 424 of file LockImpl.d.

References prev.

Referenced by AbstractLock::acquireQueued(), AbstractLock::doAcquireNanos(), AbstractLock::doAcquireShared(), and AbstractLock::doAcquireSharedNanos().

this () [inline]

Definition at line 428 of file LockImpl.d.

this (Thread thread, Node mode) [inline]

Definition at line 431 of file LockImpl.d.

References nextWaiter, and thread.

this (Thread thread, int waitStatus) [inline]

Definition at line 436 of file LockImpl.d.

References thread, and waitStatus.


Member Data Documentation

const int CANCELLED = 1
 

waitStatus value to indicate thread has cancelled

Definition at line 317 of file LockImpl.d.

Referenced by AbstractLock::cancelAcquire(), and AbstractLock::fullyRelease().

const int SIGNAL = -1
 

waitStatus value to indicate thread needs unparking

Definition at line 319 of file LockImpl.d.

Referenced by AbstractLock::shouldParkAfterFailedAcquire(), AbstractLock::transferForNotify(), and AbstractLock::unparkSuccessor().

const int CONDITION = -2
 

waitStatus value to indicate thread is waiting on condition

Definition at line 321 of file LockImpl.d.

Referenced by AbstractLock::isOnSyncQueue(), AbstractLock::transferAfterCancelledWait(), and AbstractLock::transferForNotify().

Node SHARED [static]
 

Marker to indicate a node is waiting in shared mode

Definition at line 323 of file LockImpl.d.

Referenced by isShared(), and this().

Node EXCLUSIVE [static]
 

Marker to indicate a node is waiting in exclusive mode

Definition at line 325 of file LockImpl.d.

Referenced by this().

int waitStatus
 

Status field, taking on only the following values:

SIGNAL: The successor of this node is (or will soon be) blocked (via park), so the current node must unpark its successor when it releases or cancels. To avoid races, acquire methods must first indicate they need a signal, then retry the atomic acquire, and then, on failure, block.

CANCELLED: Node is cancelled due to timeout. Nodes never leave this state. In particular, a thread with a cancelled node never again blocks.

CONDITION: Node is currently on a condition queue. It will not be used as a sync queue node until transferred. (Use of this value here has nothing to do with the other uses of the field, but simplifies mechanics.)

0: None of the above.

The values are arranged numerically to simplify use. Non-negative values mean that a node doesn't need to signal. So, most code doesn't need to check for particular values, just for sign.

The field is initialized to 0 for normal sync nodes, and CONDITION for condition nodes. It is modified only using CAS.

Definition at line 360 of file LockImpl.d.

Referenced by AbstractLock::cancelAcquire(), AbstractLock::isOnSyncQueue(), AbstractLock::release(), AbstractLock::releaseShared(), AbstractLock::shouldParkAfterFailedAcquire(), this(), AbstractLock::transferAfterCancelledWait(), AbstractLock::transferForNotify(), AbstractLock::unparkSuccessor(), and v_waitStatus().
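
For illustration, the sign convention described above boils down to checks like these (helper names are invented for the example, not taken from AbstractLock):

// CANCELLED (1) is the only positive value; SIGNAL (-1) and CONDITION (-2)
// are negative; 0 is the initial state of a normal sync node.
bool isCancelled(int ws)   { return ws > 0;  }   // positive <=> cancelled
bool needsNoSignal(int ws) { return ws >= 0; }   // 0 or CANCELLED
bool mayNeedSignal(int ws) { return ws < 0;  }   // SIGNAL or CONDITION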

Node prev
 

Link to the predecessor node that the current node/thread relies on for checking waitStatus. Assigned during enqueuing, and nulled out (for the sake of GC) only upon dequeuing. Also, upon cancellation of a predecessor, we short-circuit while finding a non-cancelled one, which will always exist because the head node is never cancelled: a node becomes head only as a result of a successful acquire. A cancelled thread never succeeds in acquiring, and a thread only cancels itself, not any other node.

Definition at line 375 of file LockImpl.d.

Referenced by AbstractLock::enq(), AbstractLock::fullGetFirstQueuedThread(), AbstractLock::getExclusiveQueuedThreads(), AbstractLock::getQueuedThreads(), AbstractLock::getQueueLength(), AbstractLock::getSharedQueuedThreads(), AbstractLock::isOnSyncQueue(), AbstractLock::isQueued(), predecessor(), AbstractLock::setHead(), AbstractLock::shouldParkAfterFailedAcquire(), AbstractLock::unparkSuccessor(), and v_prev().
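
That short-circuit can be sketched as follows, reusing the illustrative Node class from the detailed description above. The function name is hypothetical and this is a simplification consistent with the description, not a copy of AbstractLock::shouldParkAfterFailedAcquire():

// Skip over a run of cancelled predecessors (waitStatus > 0) and relink.
// The loop terminates because the head node is never cancelled.
Node skipCancelledPredecessors(Node node)
{
    Node pred = node.prev;
    while (pred.waitStatus > 0)
        pred = pred.prev;
    node.prev = pred;       // splice the cancelled nodes out of our prev chain
    pred.next = node;       // let the retained predecessor find us again
    return pred;
}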

Node next
 

Link to the successor node that the current node/thread unparks upon release. Assigned once during enqueuing, and nulled out (for the sake of GC) when no longer needed. Upon cancellation, we cannot adjust this field, but we can notice the status and bypass the node if it is cancelled. The enq operation does not assign the next field of a predecessor until after attachment, so seeing a null next field does not necessarily mean that the node is at the end of the queue. However, if a next field appears to be null, we can scan prev links from the tail to double-check.

Definition at line 390 of file LockImpl.d.

Referenced by AbstractLock::enq(), AbstractLock::fullGetFirstQueuedThread(), AbstractLock::isOnSyncQueue(), AbstractLock::unparkSuccessor(), and v_next().

Thread thread
 

The thread that enqueued this node. Initialized on construction and nulled out after use.

Definition at line 398 of file LockImpl.d.

Referenced by AbstractLock::cancelAcquire(), AbstractLock::fullGetFirstQueuedThread(), AbstractLock::getExclusiveQueuedThreads(), AbstractLock::getQueuedThreads(), AbstractLock::getSharedQueuedThreads(), AbstractLock::setHead(), this(), AbstractLock::unparkSuccessor(), and v_thread().

Node nextWaiter
 

Link to the next node waiting on a condition, or the special value SHARED. Because condition queues are accessed only when holding in exclusive mode, we just need a simple linked queue to hold nodes while they are waiting on conditions. They are then transferred to the main queue to re-acquire. And because conditions can only be exclusive, we save a field by using a special value to indicate shared mode.

Definition at line 411 of file LockImpl.d.

Referenced by AbstractLock::ConditionObject::addConditionWaiter(), AbstractLock::ConditionObject::doNotify(), AbstractLock::ConditionObject::doNotifyAll(), AbstractLock::ConditionObject::getWaitingThreads(), AbstractLock::ConditionObject::getWaitQueueLength(), AbstractLock::ConditionObject::hasWaiters(), isShared(), and this().
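
Putting the marker trick together in a few lines, again reusing the illustrative Node class from above. This is an assumption-laden sketch: the exact way SHARED and EXCLUSIVE are created in the static this() is not shown on this page, so their construction below is a guess:

// The markers are assumed to be ordinary Node instances created once;
// isShared() then reduces to an identity comparison against SHARED.
Node SHARED;
Node EXCLUSIVE;

static this()
{
    SHARED    = new Node;   // assumed: allocated in the static constructor, as the
    EXCLUSIVE = new Node;   // "References EXCLUSIVE, Node, and SHARED" note suggests
}

bool isShared(Node node)
{
    return node.nextWaiter is SHARED;   // shared mode iff nextWaiter is the marker
}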


The documentation for this class was generated from the following file:
LockImpl.d
Generated on Fri Nov 11 18:44:30 2005 for Mango by  doxygen 1.4.0