Austin Group Defect Tracker

ID: 0000755
Category: [1003.1(2013)/Issue7+TC1] System Interfaces
Severity: Editorial
Type: Clarification Requested
Date Submitted: 2013-09-21 00:42
Last Update: 2017-10-24 21:23
Reporter: nsz
View Status: public
Assigned To:
Priority: normal
Resolution: Accepted As Marked
Status: Interpretation Required
Name: Szabolcs Nagy
Organization: musl libc
User Reference:
Section: pthread_mutex_lock
Page Number: 1653
Line Number: 53580-53581
Interp Status: Approved
Final Accepted Text: See Note: 0001875.
Summary 0000755: reused thread id and mutex ownership
Description: If the owner thread of a locked mutex exits and later another thread is
created with the same thread id, then (I assume) the new thread does
not become the owner of the lock just because the thread id is reused.

So if the new thread (with the same id) uses pthread_mutex_lock or
pthread_mutex_unlock on the mutex then the semantics should be as if it
was not the owner (report error, deadlock, etc according to the type and
attributes of the mutex).

Is this the correct interpretation of mutex ownership in the case of reused
thread IDs?

This means that the implementation must have a way to determine lock
ownership other than just comparing the thread ID (or mark the owned
mutexes after thread exit, or make sure that a thread ID is not reused
while it owns a lock which has ownership-dependent semantics).

As far as I can tell, existing implementations (Solaris, BSD, glibc, musl)
currently don't handle (or care about) this hypothetical corner case
(although such a situation should be observable on a system
with 32-bit thread IDs).
(Error-checking, recursive, and robust mutexes are affected.)
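The failure mode described above can be made concrete with a toy sketch. This is our own illustrative code, not any real implementation: the names `toy_mutex`, `toy_lock`, and `toy_unlock` are hypothetical. Ownership here is decided purely by comparing thread IDs, which is exactly the scheme the report says breaks down once an ID is reused: a new thread that happens to receive the exited owner's ID would pass both ownership tests.

```c
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy error-checking mutex (illustrative only, not the standard API).
   Ownership is decided purely by comparing thread IDs. */
struct toy_mutex {
    bool locked;
    pthread_t owner;        /* meaningless once the owner has exited */
};

int toy_lock(struct toy_mutex *m)
{
    if (m->locked && pthread_equal(m->owner, pthread_self()))
        return EDEADLK;     /* "relock by owner" -- or by an impostor
                               whose reused ID merely looks like the owner */
    /* (a real lock would block or CAS here; omitted for brevity) */
    m->locked = true;
    m->owner = pthread_self();
    return 0;
}

int toy_unlock(struct toy_mutex *m)
{
    if (!m->locked || !pthread_equal(m->owner, pthread_self()))
        return EPERM;       /* the same false positive applies here */
    m->locked = false;
    return 0;
}
```

In the single-threaded case this behaves like a conforming error-checking mutex; the corner case only appears when `owner` outlives the thread it identifies.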

Desired Action: say something about owned mutexes after their thread ID is reused
- Notes
shware_systems (reporter)
2013-09-21 05:10

It already says:

"If mutex is a robust mutex and the owning thread terminated while holding the mutex lock, a call to pthread_mutex_lock() may return the error value [EOWNERDEAD] even if the process in which the owning thread resides has not terminated. In these cases, the mutex is locked by the thread but the state it protects is marked as inconsistent. The application should ensure that the state is made consistent for reuse and when that is complete call pthread_mutex_consistent(). If the application is unable to recover the state, it should unlock the mutex without a prior call to pthread_mutex_consistent(), after which the mutex is marked permanently unusable."

This means that reusing the same mutex with any thread ID, even the old one, will return [EOWNERDEAD] initially, and it is up to that thread to make the state consistent, or to inform another thread that handles recovery that this is required. So ownership is part of the mutex state, not dependent on a particular thread ID. If the state can't be recovered, then unlocking without a prior call to pthread_mutex_consistent() will cause [ENOTRECOVERABLE] to be returned on subsequent lock attempts.

What appears to be missing is what error is returned if, while the lock is in the inconsistent state, the new thread attempts a recursive lock before calling pthread_mutex_consistent() or pthread_mutex_unlock(). I would assume still [EOWNERDEAD], but this isn't explicit.
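For reference, the robust-mutex recovery flow the quoted text describes can be sketched as follows. The function and variable names are ours; the calls pthread_mutexattr_setrobust() and pthread_mutex_consistent() are the standard API. On glibc/Linux the lock call after the owner's death returns [EOWNERDEAD]:

```c
#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t m;

static void *die_holding_lock(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    return NULL;                 /* thread terminates while owning m */
}

int robust_recovery_demo(void)
{
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&m, &a);
    pthread_mutexattr_destroy(&a);

    pthread_t t;
    pthread_create(&t, NULL, die_holding_lock, NULL);
    pthread_join(t, NULL);       /* the owning thread is now gone */

    int r = pthread_mutex_lock(&m);
    if (r == EOWNERDEAD) {
        /* repair the protected state here, then mark it consistent */
        pthread_mutex_consistent(&m);
    }
    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    return r;
}
```

Note that ownership recovery here is keyed to the mutex state, not to any thread ID, which is the point being made above.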
dalias (reporter)
2013-09-21 05:31

The text you quoted pertains only to robust mutexes. It is irrelevant to this request for clarification.
shware_systems (reporter)
2013-09-21 06:08

"As far as i can tell existing implementations (solaris, bsd, glibc, musl)
currently don't handle (or care about) this hypothetical corner-case.
(although such situation should be possible to observe on a system
with 32bit thread ids).
(errorchecking, recursive and robust mutexes are affected)"

He included robust, so it partially applies... If extra language is needed for non-robust mutexes, then yes, that can be added.
dalias (reporter)
2013-09-21 06:30

That seems to have been a mistake. This issue does not apply to robust mutexes since they are automatically unlocked (well, changed to EOWNERDEAD state).
shware_systems (reporter)
2013-09-22 02:36

Suggested language for non-robust mutexes, in pthread_mutex_lock(). I believe this shows the expected implementation behavior: a reused thread ID is treated as equivalent to a different thread ID. The second paragraph covers that recovering from an orphaned non-robust mutex falls more on the implementation than on the application, but is not required. The Rationale addition makes explicit the potential consequences of not unlocking a mutex before thread termination. I don't believe pthread_mutex_timedlock() needs changes, as the basic 'block on already locked' behavior covers it, and the timeout prevents deadlock.

After Page 1654, Line 53628, insert paragraphs:

If a mutex is not a robust mutex and the process containing the owning thread, or the owning thread in the current process, terminates without fully unlocking the mutex, a call to pthread_mutex_lock() using this mutex shall block until such time as the thread is canceled, and a call to pthread_mutex_trylock() shall return [EBUSY], even if the calling thread's thread ID is the same as that used by the terminated thread.

It is implementation-defined what facilities are provided, if any, for how a non-robust mutex left locked by an exiting thread after all cancellation cleanup handlers have been executed is to be made consistent and unlocked again for reuse by a subsequent thread, or for notifying a thread that another thread has been blocked by attempting to lock such a mutex. Such facilities, when provided, shall be documented by the implementation.

After Page 1655, Line 53685, insert:
For non-robust mutexes it is the application's responsibility to unlock the mutex fully before the owning thread exits, and also to use a cancellation cleanup handler to accomplish this when there is a chance the thread may be canceled for any reason, to prevent deadlocks. Implementation-provided mutexes left locked after exit are effectively orphaned and may require a system reboot to reinitialize. A mutex initialized in process-shared memory may be inaccessible until the shared memory is released and reallocated by some process, and since this does not guarantee the original mutex pointer will be the same, it would have to be reacquired.
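The cleanup-handler discipline recommended here can be sketched as follows. The names `unlock_on_cancel`, `worker`, and `cleanup_demo` are ours for illustration; pthread_cleanup_push()/pthread_cleanup_pop() are the standard mechanism for releasing a lock when the holding thread is canceled:

```c
#include <pthread.h>
#include <stddef.h>
#include <time.h>

static pthread_mutex_t cm = PTHREAD_MUTEX_INITIALIZER;

static void unlock_on_cancel(void *arg)
{
    pthread_mutex_unlock((pthread_mutex_t *)arg);
}

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&cm);
    pthread_cleanup_push(unlock_on_cancel, &cm);
    for (;;)
        pthread_testcancel();      /* cancellation point while holding cm */
    pthread_cleanup_pop(1);        /* not reached; handler runs on cancel */
    return NULL;
}

int cleanup_demo(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    struct timespec delay = { 0, 100 * 1000 * 1000 };  /* 100 ms */
    nanosleep(&delay, NULL);       /* let the worker take the lock */

    pthread_cancel(t);
    pthread_join(t, NULL);

    /* the handler released cm, so this succeeds instead of deadlocking */
    return pthread_mutex_trylock(&cm);
}
```

Without the handler, the canceled worker would terminate still owning `cm` and every later lock attempt would block forever, which is exactly the orphaned-mutex hazard described above.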
dalias (reporter)
2013-09-22 03:20

That's way too complicated. Assuming the intent is not to require implementations to deal with the thread id reuse issue, the added text should simply be:

If a thread is the owner of any mutex, rwlock, spinlock, or stdio FILE stream lock at the time it terminates, the behavior is undefined.
shware_systems (reporter)
2013-09-23 04:50

The intent is to declare that it is already normative that implementations are expected to behave as if thread ID reuse is not an issue, and that this is the defined behavior conforming implementations are expected to produce when a thread ID is reused with non-robust mutex types in the same situation as in the preceding paragraph. The standard says that if a mutex is locked by one thread, another thread trying to lock it will block (sentence 2 of the description). That the new thread is using the same ID does not make it any less a different thread from the original locker. So saying the behavior is undefined is less correct than what I have.

The rest discusses that there is a known downside to this expected behavior, which some implementations may provide extensions to help alleviate; these would most likely be slower than robust mutexes already are, and are not portable. Still, they would serve a purpose, so it is not improper to mention some expectations about them. The Rationale addition is more explicit about what that downside is.
Don Cragun (manager)
2013-10-10 18:23

Interpretation response
The standard clearly states that a mutex is owned by the thread that locks it, and conforming implementations must conform to this.

If an implementation changes the owner of an existing mutex lock to a new thread at some time after the owning thread terminates, that implementation does not conform to the standard's requirements. The lock's owner is required to be the thread that acquired the lock: not a thread that happens to have the same thread ID as the thread that locked it, not a thread that happens to be using the same slot in the system's thread table as the thread that locked it, and not a thread identified by any other attribute that is not tied to the lifetime of the thread that locked it.

If a conforming implementation wants to use a resource (such as a thread ID) to determine a lock's owner, it must not reuse that resource until all locks that were held by the terminated thread that was using that resource have been released.

Notes to the Editor (not part of this interpretation):
ajosey (manager)
2014-02-21 15:39

Interpretation Proposed 21 Feb 2014
ajosey (manager)
2014-03-25 13:41

Interpretation Approved: 25 March 2014
torvald (reporter)
2014-10-07 11:50

I think the accepted resolution is not practical. Have you considered the consequences for implementations, in particular with respect to recursive locks? Why do we have robust locks at all, if normal locks aren't left with semantics that actually allow efficient implementations?

Furthermore, it is stated in Note: 1875 that a mutex is owned by the thread that locks it. So, if the thread terminates and thus no longer exists as an entity, it can't own a mutex either, right? I agree that one may argue that no other new thread can own it either, but this kind of ambiguity indicates to me that it's best to do what was stated in Note: 1848: if a thread that owns non-robust mutexes terminates, you get undefined behavior.
torvald (reporter)
2016-09-09 14:59


Have you considered the consequences for implementations? If so, what implementation approaches did you consider, and how did you assess the runtime costs? For example, having to look up a separate thread ID on another cache line every time one tries to acquire a recursive lock is not free in terms of runtime overhead. This ID would also have to be large enough to never overflow, so 32 bits is probably not sufficient.
Keeping a list of all locks a thread has acquired is obviously not efficient enough.

Currently, I don't see a way to implement this requirement in a way that does not add significant overhead to recursive mutexes. Therefore, I'm not planning to implement this requirement in glibc; I'd much rather ignore this requirement than make recursive mutexes slower for everyone, because the latter would drive people away from POSIX facilities.
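One scheme that would satisfy the interpretation, at exactly the cost described in the note above, is to compare a never-recycled 64-bit per-thread token instead of the raw thread ID. This is an assumption for illustration, not glibc's actual design, and all names in it are hypothetical; note the extra TLS access on the lock fast path, which is the overhead being objected to:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sketch: every thread lazily draws a 64-bit token from a
   global counter that is never recycled, so a dead thread's token can
   never be mistaken for a live thread's.  A recursive lock compares
   this token instead of the thread ID. */

static atomic_uint_fast64_t next_token = 1;
static _Thread_local uint64_t my_token;   /* 0 = not yet assigned */

static uint64_t thread_token(void)
{
    if (my_token == 0)
        my_token = (uint64_t)atomic_fetch_add(&next_token, 1);
    return my_token;                       /* TLS read on every lock op */
}

struct rec_lock {
    _Atomic uint64_t owner;                /* 0 = unlocked */
    unsigned depth;
};

int rec_trylock(struct rec_lock *l)
{
    uint64_t self = thread_token();
    uint64_t expect = 0;
    if (atomic_load(&l->owner) == self ||
        atomic_compare_exchange_strong(&l->owner, &expect, self)) {
        l->depth++;                        /* recursive acquisition */
        return 0;
    }
    return -1;  /* held by another (live or dead) thread */
}

int rec_unlock(struct rec_lock *l)
{
    if (atomic_load(&l->owner) != thread_token())
        return -1;                         /* not the owner */
    if (--l->depth == 0)
        atomic_store(&l->owner, 0);
    return 0;
}
```

A 64-bit counter effectively never wraps, so a reused kernel thread ID can never alias a previous owner here; the trade-off is the extra token lookup and a wider owner field on every acquisition.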
carlos (reporter)
2017-10-24 21:23

Has there been any status update here and consideration given to the points made by Torvald Riegel and Rich Felker?

- Issue History
Date Modified Username Field Change
2013-09-21 00:42 nsz New Issue
2013-09-21 00:42 nsz Name => Szabolcs Nagy
2013-09-21 00:42 nsz Organization => musl libc
2013-09-21 00:42 nsz Section => pthread_mutex_lock
2013-09-21 00:42 nsz Page Number => 0
2013-09-21 00:42 nsz Line Number => 0
2013-09-21 05:10 shware_systems Note Added: 0001842
2013-09-21 05:31 dalias Note Added: 0001843
2013-09-21 06:08 shware_systems Note Added: 0001845
2013-09-21 06:30 dalias Note Added: 0001846
2013-09-22 02:36 shware_systems Note Added: 0001847
2013-09-22 03:20 dalias Note Added: 0001848
2013-09-23 04:50 shware_systems Note Added: 0001849
2013-10-10 18:23 Don Cragun Page Number 0 => 1653
2013-10-10 18:23 Don Cragun Line Number 0 => 53580-53581
2013-10-10 18:23 Don Cragun Interp Status => ---
2013-10-10 18:23 Don Cragun Note Added: 0001875
2013-10-10 18:23 Don Cragun Status New => Under Review
2013-10-10 18:25 Don Cragun Final Accepted Text => See Note: 0001875.
2013-10-17 15:10 Don Cragun Status Under Review => Interpretation Required
2013-10-17 15:10 Don Cragun Resolution Open => Accepted As Marked
2013-10-17 15:10 Don Cragun Interp Status --- => Pending
2014-02-21 15:39 ajosey Interp Status Pending => Proposed
2014-02-21 15:39 ajosey Note Added: 0002155
2014-03-25 13:41 ajosey Interp Status Proposed => Approved
2014-03-25 13:41 ajosey Note Added: 0002196
2014-10-07 11:50 torvald Note Added: 0002408
2016-09-09 14:59 torvald Note Added: 0003375
2017-01-05 19:50 torvald Issue Monitored: torvald
2017-10-24 21:23 carlos Note Added: 0003870
