Austin Group Defect Tracker

ID: 0000741
Category: [1003.1(2013)/Issue7+TC1] Base Definitions and Headers
Severity: Editorial
Type: Enhancement Request
Date Submitted: 2013-08-24 17:04
Last Update: 2020-03-25 15:51
Reporter: steffen
View Status: public
Assigned To:
Priority: normal
Resolution: Accepted As Marked
Status: Applied
Name: Steffen Nurpmeso
Organization:
User Reference:
Section: XBD, 13. Headers, signal.h
Page Number: 333
Line Number: 11131
Interp Status: ---
Final Accepted Text: See Note: 0001834
Summary: 0000741: Add a NSIG constant (or, alternatively, SIGMAX)
Description: POSIX does not yet offer a way to query the (minimum and) maximum values of the signal constants. It does offer one for the real-time signals, through the SIGRTMIN to SIGRTMAX range (of at least RTSIG_MAX size).

Traditionally, systems define the necessary constant, named either NSIG or, less commonly, SIGMAX.

The current situation is very unfortunate, since complicated tests have to be written whenever sufficiently sized arrays for signal handlers etc. are needed -- the fallback of last resort being a dance over all possible signal constants to find the maximum value in use.
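For illustration, a sketch of the kind of fallback test meant here, assuming only a handful of the standard signal names (a real check would have to enumerate every constant the target system might define):

#include <signal.h>

/* Hypothetical fallback: take the maximum over (some of) the known signal
   constants and add 1 -- tedious and easy to get wrong, which is the point. */
#define MAX_OF(a, b) ((a) > (b) ? (a) : (b))
#define FALLBACK_NSIG \
    (MAX_OF(SIGHUP, MAX_OF(SIGTERM, MAX_OF(SIGUSR2, MAX_OF(SIGCHLD, SIGURG)))) + 1)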

Note that this issue is also somewhat related to a thread on the mailing list.
(Since those archives have again been moved, and I see no way to get at the normal sequence number, I use the GMANE one, in [1].)

 [1] http://article.gmane.org/gmane.comp.standards.posix.austin.general/7822
Desired Action: Because of the very widespread availability of the NSIG constant:

add, on page 333, after line 11130, the new paragraph:

[CX]The <signal.h> header shall declare the NSIG macro, which shall expand to the largest integer constant expression assigned to any of the signals that the implementation supports, plus 1.[/CX]
Tags: issue8

- Notes
(0001755)
shware_systems (reporter)
2013-08-24 21:35

I believe this is a duplicate of a pre-Issue 7 bug report, or part of one, that was rejected, as I remember some sort of discussion about it.

"The <signal.h> header shall define the following macros that are used to refer to the signals that
occur in the system. Signals defined here begin with the letters SIG followed by an uppercase
letter. The macros shall expand to positive integer constant expressions with type int and
distinct values. The value 0 is reserved for use as the null signal (see kill( )). Additional
implementation-defined signals may occur in the system."

For ease of implementation one might assume that the SIGXXX constants form a contiguous range starting from 1, as contiguity is required of the SIGRT range, but there is no language preventing an implementation from assigning them downwards from INT_MAX towards 0, or from picking arbitrary non-contiguous values, such as 2^n bitmask positions for direct and'ing and or'ing of a sigset_t field. The language also doesn't restrict implementation-defined values or the SIGRT range from using negative values; only the values the standard defines must be positive. INT_MAX + 1 overflows to INT_MIN in that scenario on most architectures, possibly conflicting, since INT_MIN may be the bottom of the SIGRT range or some other signal number. Because of this, providing facilities for application-defined signal numbers that don't conflict with the implementation-defined values or the SIGRT range is more a quality-of-implementation issue, which NSIG or SIGMAX might then be useful for. Because of that possible overflow situation, an NSIG isn't necessarily portable, however.

I believe applications are expected to do their own allocations of numbers only out of the SIGRT range, whether or not real-time behavior is required of those signals, rather than the standard defining a real-time-specific range and a non-real-time-compatible range. That SIGRTMIN and SIGRTMAX aren't limited to being constants leaves room for an implementation to provide a dynamic range rather than a fixed one, but again that's a quality-of-implementation issue, as such a range has to avoid whatever values the required signal numbers have arbitrarily been assigned.

The sigset_t type and the interfaces it's used with are supposed to opaquely hide whatever allocation scheme the implementation uses for signal number values. I believe the expectation is that portable applications will use those interfaces to manage whatever allocations are needed and use the sigismember( ) interface for usage tracking. This may require using linked lists rather than static arrays for that ancillary data dance, but as I remember it, keeping the allocation scheme flexible outweighed that consideration from a backwards-compatibility standpoint.
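A minimal sketch of that tracking style, assuming nothing about the layout of sigset_t and using only the standard set-manipulation interfaces (so no array indexed by signal number, and hence no NSIG, is needed):

#include <signal.h>

static sigset_t seen_signals;            /* signals observed so far */

static void init_tracking(void)  { sigemptyset(&seen_signals); }
static void mark_seen(int signo) { sigaddset(&seen_signals, signo); }
static int  was_seen(int signo)  { return sigismember(&seen_signals, signo) == 1; }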
(0001758)
geoffclare (manager)
2013-08-28 09:00

On Solaris systems the definitions of NSIG and MAXSIG are within a #if
that checks for the __EXTENSIONS__ feature-test macro, and has this
comment:

/*
 * use of these symbols by applications is injurious
 * to binary compatibility
 */

It is easy to see why that comment is there: on Solaris 10, NSIG is 49
and MAXSIG is 48. On Solaris 11, NSIG is 73 and MAXSIG is 72. If an
application uses NSIG or MAXSIG and is compiled on Solaris 10 but run
on Solaris 11, it will be using the wrong value.

In XPG2, signal.h was required to define NSIG. I imagine that the
original POSIX developers anticipated that some systems would need
to increase the number of signals in future releases and that is
why (perhaps with other reasons) they decided not to include NSIG
in POSIX.1-1988.

If we want to provide some means of discovering the highest valid
signal number, it should not be a compile-time constant.
(0001760)
steffen (reporter)
2013-08-28 10:28
edited on: 2013-08-28 22:06

Sorry, I didn't know about XPG2; the oldest document I use is IEEE Std 1003.1, 1996 Edition.

But I strongly disagree with your statement "It is easy to see why"; normally such issues are handled by binary compatibility layers in the operating system kernel. Well, I don't have Solaris around, because the download area even asks for the colour of my underwear (which I necessarily fail to answer), but ... looking at [1], I see.

 [1] https://hg.openindiana.org/upstream/illumos/illumos-gate/file/74a59768760e/usr/src/head/signal.h

Well, this is bad design, *given the history of NSIG*, *and if developers have portability in mind*. These guys don't seem to care about developers who have to write portable applications, and isn't that the reason why there is POSIX? So here we don't have "yes, we can"; can we? (Sorry for the rhetorical blah, there are upcoming elections in Germany.)

Here we can see a snippet ([2]) of a language I hate but many people love, and how it deals with the problem:

 [2] http://www6.uniovi.es/python/dev/src/c/html/signalmodule_8c-source.html

#ifndef NSIG
# if defined(_NSIG)
#  define NSIG _NSIG          /* For BSD/SysV */
# elif defined(_SIGMAX)
#  define NSIG (_SIGMAX + 1)  /* For QNX */
# elif defined(SIGMAX)
#  define NSIG (SIGMAX + 1)   /* For djgpp */
# else
#  define NSIG 64             /* Use a reasonable default value */
# endif
#endif

Ouch (ouch ouch). And I really think this is exemplary.

Thinking about it, I rather assume that these "dynamic symbols" have something to do with a "ripoff" mechanism to make some signals available to some "systemish" library, so as to prevent user programs from using those signals for their own purposes.

But then, `NSIG' should exclude those from a normal application's view.
And POSIX should really think about offering a facility that can *regularly* be used to manage (ripoff) signals, so that *normal* libraries can register a realtime signal for their exclusive use.
Imo `NSIG' has to be a constant, usable on the preprocessor level.

(0001763)
dalias (reporter)
2013-08-28 17:20

I disagree with note # 0001758. The existence of sigset_t gives a hard upper bound on the number of signals an implementation can support at runtime for a given build time environment/ABI. If implementors anticipate increasing the number of signals beyond what's currently supported, but still within the bounds of sigset_t's size, it makes sense to just define NSIG as 8*sizeof(sigset_t)+1. This may result in applications wasting a small amount of memory, but the large sigset_t is already wasting memory on such implementations anyway.

If requiring NSIG is really not acceptable, it could be made optional at compile time, in which case sysconf(_SC_NSIG) would be needed to determine the limit at runtime. However, in my opinion, a decent portion of the potential use cases are at compile time, so this would just lead to applications having to do things like:

#ifndef NSIG
#define NSIG (8*sizeof(sigset_t)+1)
#endif
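A sketch of how the two could be combined, assuming the proposed sysconf(_SC_NSIG) existed (hypothetical here) and falling back to the sigset_t-derived bound otherwise:

#include <signal.h>
#include <unistd.h>

static long max_signals(void)
{
#ifdef _SC_NSIG                          /* proposed, not yet standard */
    long n = sysconf(_SC_NSIG);
    if (n > 0)
        return n;
#endif
    return 8 * (long)sizeof(sigset_t) + 1;   /* counting bound from above */
}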
(0001764)
Don Cragun (manager)
2013-08-28 22:10
edited on: 2013-08-28 22:12

I edited the bug Description and Note: 0001760, removing the < and > characters from the links to external web pages, in response to 0000742.

(0001766)
geoffclare (manager)
2013-08-29 08:23
edited on: 2013-08-29 08:36

(response to Note: 0001763)

On Solaris 10 and 11, sizeof(sigset_t) is 16. I assume there is a
good reason why the Solaris developers chose to define NSIG as 49 and
73, respectively, and document the binary compatibility problem
this causes rather than eliminating it by defining NSIG as 129.

You suggest that applications which want a compile-time constant
could use:

#ifndef NSIG
#define NSIG (8*sizeof(sigset_t)+1)
#endif

Since NSIG has binary compatibility problems, such applications
would actually be better off using:

#undef NSIG
#define NSIG (8*sizeof(sigset_t)+1)

(0001770)
steffen (reporter)
2013-08-29 11:17

The Solaris wording would also permit adjustment of NSIG according to an updated POSIX, valid starting with the release that states conformance to that newly crafted standard, and onwards.
(0001780)
shware_systems (reporter)
2013-08-30 08:53

Re: #1760
> But i strongly disagree to your statement "It is easy to see why"; normally such issues are handled by binary compatibility layers in the operating system kernel.

It's not necessarily obvious, but using that signal( ) reimplementation as example:
static struct {
    int tripped;
    PyObject *func;
} Handlers[NSIG];

static int is_tripped = 0; /* Speed up sigcheck() when none tripped */

static void signal_handler(int sig_num)
{   /* no range limit check on sig_num, assumes it is always valid */
    /* ... */
    is_tripped++;
    Handlers[sig_num].tripped = 1;
    /* ... */
}
When that's compiled on Solaris 10 with its NSIG, but run on Solaris 11, a call to signal_handler(50) makes the assignment Handlers[50].tripped = 1 write past the end of the array and overwrite is_tripped, unless bounds checking is compiled into the release version on top of whatever range checks are used elsewhere. That's not something a binary compatibility layer can easily guard against, I think.
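A sketch of the defensive variant such code would need (not taken from the python sources):

static void signal_handler(int sig_num)
{
    /* Reject signal numbers outside the range the table was built for,
       instead of writing past the end of Handlers[]. */
    if (sig_num < 0 || sig_num >= NSIG)
        return;
    is_tripped++;
    Handlers[sig_num].tripped = 1;
}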

Re: #1763
> If requiring NSIG is really not acceptable, it could be made optional at compiletime, in which case sysconf(_SC_NSIG) would be needed to determine the limit at runtime.

Here I agree with you, it is a sysconf( ) candidate, but something portable can be written based on that return value, with only a minor increase in memory usage over a static allocation, by using malloc( ) instead. Optional at compile time would then be more for legacy support of systems that don't support sysconf( ) at all. It would also be more binary portable, avoid possible overwrites like the one above, and get rid of all those version-check #ifdefs. Well, it would be a case to check first, anyway. I'd split it into _SC_NSIG_USED and _SC_NSIG_AVAIL, and maybe _SC_SIGRT_USED & AVAIL, to cover sigset_t implementations like Solaris's that statically reserve more than they use.
Some sort of unused signal number registration and unregistration interfaces would still need a sample implementation before they could be considered an addition to the standard, I think. These would also need a generic way of updating the strsignal( ) database for various locales, so this isn't trivial. Look at all the #ifdefs used just to create a dictionary for one locale in that python code, as an example.

XRAT B.2.4 indicates the signum symbols were expected to be used mostly as case labels, not as array indexes or in for/while loops. An interface like int sigindex(int n), where n would be from 1 to NSIG_USED, could return the actual signum allocated on a system that wasn't using contiguous allocations, and should be pretty trivial to add on current systems, so it can be a legitimate enhancement request. I think that's more what's missing here than NSIG and SIGMAX macros. This would also have a degree of binary portability and allow loops to not worry as much about what is being used in interface calls.
A complementary int signumidx(int signum) may be needed in handlers, but again that's fairly trivial with contiguous allocations, and it keeps the flexibility currently present that supports variations as mentioned in XRAT B.2.4+. At most, for now, I think INT_MIN could be excluded from being a legitimate signum value, to help ensure more backwards compatibility, but that's about it. Btw, Solaris 11 shows that XRAT's expectation that systems wouldn't be using more than 64 signals seems outdated. (p.3537, l.119701ff)
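To make the floated interfaces concrete -- they are hypothetical, specified by no implementation or standard -- on a system with contiguous signal numbers starting at 1 they would reduce to identity mappings:

/* Hypothetical mapping interfaces, trivial for contiguous allocations: */
int sigindex(int n)       { return n; }       /* index 1..NSIG_USED -> signal number */
int signumidx(int signum) { return signum; }  /* signal number -> index */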
(0001819)
shware_systems (reporter)
2013-09-06 06:14

Some further thoughts, in light of the 5 Sep 13 phone discussion on this:

> From: Rich Felker <dalias@aerifal.cx>
> > On Wed, Aug 28, 2013 at 07:52:10AM +0000, Schwarz, Konrad wrote:
> >
> > This assumes that sigset_t is a bit vector. However,
> > sigset_t is specified opaquely.
> No, it's a pure counting argument. There is no way you can store more
> state than 8*sizeof(T) on/off flags in an object of type T. The
> representation used in that object is irrelevant.

What happens when sigset_t is just a pointer to a linked list of records, or a class object reference, if the implementation is coded in C++? I realize it's not recommended, but sigset_t is supposed to be small enough to pass by value, or to be embedded in other records without bloating those records considerably, and a void * fits that goal. A bitmap might not, however it is containerized, even though the assertion that it represents the maximum one container can nominally hold is accurate. When a range gets broad enough, something that handles sparse sets is often less memory intensive. Some compilers don't pass arrays or records by value anyway, or not above a certain size; if the semantics allow it they convert the value reference to a temporary malloc( ) pointer and add a function epilogue that free( )s it, so you might get the speed penalty warned about in XRAT anyway. This would limit NSIG to 8*sizeof(void *)+1, but SIGRTMAX - SIGRTMIN could be well over 16K on a larger system and a list or object would handle that, while sizeof(sigset_t) would be of little use for determining an accurate value for NSIG... Portability issues arise with sigset_t when it is NOT considered opaque and applications access its internal structure directly rather than letting the implementation handle that; another potential issue discussed in XRAT and during the call.

That current implementations have kept things simple by only doing a static range shouldn't preclude that someone inventive might bite the intellectual bullet and implement a dynamic range, or ranges, in a nominally portable fashion that handles the extra complexity involved. It may not be trivial, but there are examples that I feel can be adapted to make it practically usable by applications.

Limiting sigset_t to an array or record typedef that is only a bitmap container, so that a compile-time NSIG define like the one proposed would be plausibly reliable, portable, and still flexible enough to support dynamic ranges, still appears overly impractical given the other considerations. During the phone call the general consensus was that this flexibility should be maintained, as I heard it, but the fairly widespread implementation-specific usage of NSIG did make it a candidate for sysconf( ) support in some fashion. It was also brought up during the phone call that an alternative NSIG definition was plausible that would be consistent with current practice, but wording for it wasn't specified, so I'm taking a stab at it, requirements-wise, and at what's plausible for sysconf( ) in that context. Actual specific language I expect will be discussed in another conference call session.

-------------------------------------------
At compile time the only thing the standard guarantees will be available are the signals it defines, and like other run-time-variable compile-time defines, those are minimums that sysconf( ) provides implementation-defined larger values for. For Issue 7, that would limit an NSIG_MAX to, if I counted correctly, 28 + RTSIG_MAX as the symbolic minimum all implementations support for all applications, which could be added portably to <limits.h>, with NSIG_MIN_USED and NSIG_MAX_AVAIL as the symbolic constants that have a sysconf( ) counterpart; but as it would be _POSIX_VERSION sensitive as to availability and a new feature, I think it would have to wait until Issue 8 to be added.

The header <unistd.h> would add an _XOPEN_NSIGS and _POSIX_NSIGS as default symbolic #defines for that 28 value, to leave open that XSI systems may use more than base POSIX systems in future versions, and one of those symbolic constants would be the default minimum value sysconf(_SC_NSIG_MIN_USED) would be expected to return. A sysconf(_SC_NSIG_MAX_AVAIL) would return RTSIG_MAX by default, and it would be implementation-defined for it to return larger values if dynamic ranges were supported or more bits were reserved in sigset_t than needed, with the sum being the value NSIG_MAX in <limits.h> is #define'd to, e.g.
#define NSIG_MIN_USED <value of sysconf(_SC_NSIG_MIN_USED)>
#define NSIG_MAX_AVAIL <value of sysconf(_SC_NSIG_MAX_AVAIL)>
#define NSIG_MAX (NSIG_MIN_USED + NSIG_MAX_AVAIL)

Additional language that this is just a count, not a zero-based range bound, would be desirable, with the caveat that some implementations may allow it to be used as both, as well as a clarification that always RTSIG_MAX <= NSIG_MAX_AVAIL. A further requirement might be that for a given _POSIX_VERSION the NSIG_MAX sum shall not change between implementation versions without an addendum to the conformance document and relevant user documentation, as explained below.
-------------------------------------------

Defined like this, NSIG_MAX or the NSIG_MIN_USED value would be what legacy applications could rely on as a suitable value to #define NSIG from for legacy-style implementations. Which value would be more appropriate would be application specific. For implementations too old to support the RTSIG_MAX constant, NSIG = NSIG_MAX = NSIG_MIN_USED and NSIG_MAX_AVAIL would be zero, I expect, for a cross back-compile. For Solaris 11.5, as an example, NSIG_MAX would be the 129 its 8*sizeof(sigset_t)+1 reserves, because sigset_t happens to be a bitmap container, and the NSIG_MIN_USED value would most likely be the 73, as count(signums). If back-ported to Solaris 10, NSIG_MIN_USED would be 49 and NSIG_MAX_AVAIL would be 80, so NSIG_MAX would still be 129.

I believe that covers existing implementations and would be forward compatible with implementations that also define sigset_t statically and signums contiguously. It does not address binary compatibility issues directly, but something like that python code would use NSIG_MAX to define the array, and NSIG_MAX_AVAIL could be used to set "always ignore" defaults in a
for (i = NSIG_MAX, j = NSIG_MAX_AVAIL; j > 0; i--, j--) { . . . }
loop to avoid the overwrite issue for a while. Changing the NSIG_MAX sum would require code like this to be recompiled to avoid such issues, so the documentation requirement would signal to application developers that it was needed; but as long as no SIGXXX values changed, just were added to, no code changes should be required to support the signals it does expect.

Newer applications would be expected to use the 4 _SC_NSIG values suggested in Note #1780, as a dynamic-range implementation might return a fixed incremental delta value for the _SC_NSIG_AVAIL constant rather than an absolute ceiling, and other considerations might affect the other 3 return values when an application is run. I'm leaving open the possibility that RTSIG_MAX might actually be more of a RTSIG_MIN_USABLE value and that the mapping interfaces described might be done also. Whether these would be added at the same time as the above, or after an actual implementation uses them as a representative example that their use only with malloc( ) is practical, needs discussing also.
(0001820)
dalias (reporter)
2013-09-06 06:21

Regarding: "What happens when sigset_t is just a pointer to a linked list of records...?"

Such an implementation cannot meet the interface requirements for sigset_t. For example, there is no provision that this type not be assignable. The entire state must fit into the object itself, and that yields a limit of 8*sizeof(sigset_t) signals.
(0001822)
geoffclare (manager)
2013-09-06 09:13

My understanding of the consensus decision reached in the Sept 5
meeting is that we should add sysconf(_SC_NSIG), i.e. the _SC_NSIG
constant in <unistd.h> and a corresponding row in the table on the
sysconf() page. This would provide applications with a means to
obtain the number they have been getting from the non-standard NSIG
constant (or the number they actually wanted, which might differ from
NSIG if the application was compiled on an old system).

The suggestions in Note: 0001819 seem needlessly complicated to me.
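A sketch of what this would enable, assuming the (still hypothetical, not yet standard) _SC_NSIG constant: size signal tables at run time instead of baking in a compile-time NSIG.

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

struct handler_slot { int tripped; void (*func)(int); };

/* One slot per possible signal number, 0 .. sysconf(_SC_NSIG)-1. */
static struct handler_slot *alloc_handler_table(long *count)
{
    long n = sysconf(_SC_NSIG);   /* highest supported signal number + 1 */
    if (n <= 0)
        return NULL;
    *count = n;
    return calloc((size_t)n, sizeof(struct handler_slot));
}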
(0001823)
shware_systems (reporter)
2013-09-06 09:13

That provision is partly implied by the requirement that sigfillset( )/sigemptyset( ) be used to initialize a sigset_t instance, and that sigaddset( ) and sigdelset( ) are provided to modify the structure, rather than it being initialized by assignment from another instance of sigset_t directly or via a constant initialization declaration. You may be right that this is not enough exclusion, and something like a sigcpyset( ) might be needed to make what I discussed robust from the application usage perspective, so the exclusion could be explicitly added. The strongest contra-indication I saw for a pointer being usable as part of a sigset_t structure definition was that the Rationale of sigemptyset( ) says it is not intended for dynamic allocations, but this is not normative, nor does it indicate which allocations it is not intended for. In looking over the interfaces that reference sigset_t directly or indirectly as an output, albeit not exhaustively, I did not see where any direct assignments were required from the implementation perspective. The few places where a sigset_t is set with values by the interface could be implemented as a sigemptyset(dstset) followed by a sigismember(srcset)/sigaddset(dstset) loop that would hide any pointer dereferencing.
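A sketch of that copy loop, using only standard interfaces; the bound maxsig is assumed to come from whatever facility ends up providing it:

#include <signal.h>

static int copy_sigset(sigset_t *dst, const sigset_t *src, int maxsig)
{
    int sig;
    if (sigemptyset(dst) != 0)
        return -1;
    for (sig = 1; sig < maxsig; sig++)
        if (sigismember(src, sig) == 1 && sigaddset(dst, sig) != 0)
            return -1;
    return 0;
}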

I did notice, in the process, that Issue 7 had changed the requirement on SIGRTMIN and MAX to be positive only, so saying they could be negative now was in error, but I believe implementation-defined signal numbers still aren't restricted that way.
(0001824)
shware_systems (reporter)
2013-09-06 10:02
edited on: 2013-09-06 10:54

"The suggestions in Note: 0001819 seem needlessly complicated to me."

I'd prefer simpler also, and I did say the consensus was only that some sort of sysconf support be added, but it was brought up that a #define NSIG not based on sigset_t might be plausibly portable. My intent was that an OS may have version updates between standard updates, and splitting it up like that provides, for some implementations, some degree of binary forward compatibility based just on compile-time defines that could still be used to declare arrays as statics or externs, not only as mallocs( ), and I believe the added complexity accomplishes that. The simpler way forces existing applications to be refactored to use malloc( ) rather than just adding a #define NSIG NSIG_MAX or #define NSIG NSIG_MIN_USED to the source, with suitable #ifdefs added, to possibly get behavior equivalent to a legacy system.

The example provided shows how the three could be used to provide that binary compatibility where just one or the other might be insufficient, with a bit of recoding, but as it was getting long enough I didn't get into all the ramifications of why the language is there. I also expect more language will be needed to guarantee the conditions of portability but that would be part of the official changes. This is more a precis that something is possible.

(0001825)
steffen (reporter)
2013-09-06 11:44

Note: 0001822:

 add sysconf(_SC_NSIG), i.e. the _SC_NSIG
 constant in <unistd.h> and a corresponding row in the table on the
 sysconf() page. This would provide applications with a means to
 obtain the number they have been getting from the non-standard NSIG
 constant (or the number they actually wanted, which might differ from
 NSIG if the application was compiled on an old system).

This is a decision I don't understand.
The desired change doesn't address the issue of runtime dynamics, thus requiring _SC_NSIG to be a constant for the lifetime of a program.

I don't understand why the core doesn't address this in a double-tracked way:
provide a new _SC_NSIG sysconf(3) variable that can be used to portably create sufficiently sized data storage, e.g. for an array of signal function pointers, but, in order not to break software that has been in use for decades, also define NSIG as the maximum possible constant that is to be expected.

In this respect it has to be noted that binary compatibility for different _SC_NSIG values cannot be guaranteed because of the nature of sigset_t; i.e., _SC_NSIG can only differ within a limited range, namely the maximum number of distinct values, thus bits, that can be stored in a sigset_t. According to the RATIONALE of sigemptyset():

  This function is not intended for dynamic allocation.

Note that this rationale also gives clear instructions to preserve binary compatibility.

Because of this I personally object to this core decision.
It is possible to define a C-preprocessor-evaluable NSIG constant; that is the way this constant has been used for decades. And if there is a desire to add a _SC_NSIG that truly states the effective number of signals in use in the currently booted kernel/userland combination, then that is not a bad decision (though I personally would then favour atomic sigaddset() and sigismember(), because sigset_t is of course always sufficiently spaced to store all possible signals).
(0001826)
jilles (reporter)
2013-09-06 12:27

FreeBSD's NSIG constant only covers "old" signals. It is defined as 32. Realtime signals are outside this range. Any new non-realtime signals will probably also be outside this range, since there may be binary compatibility issues with changing NSIG.
(0001827)
steffen (reporter)
2013-09-06 14:00

Yes, unlike Solaris, the NSIG of FreeBSD doesn't include system-library or language-runtime specific signals.
And of course these are beyond the knowledge of a normal userspace program -- and should be! But then -- why should such a signal be in NSIG?
(Indeed, it is hard to believe that such signals are *really* included in some NSIG constant. I'm pretty sure that some programs out there would first puke and then bite the bullet if suddenly a Java virtual machine signal came along.)

However, a userspace program that wants to handle a lot of job control, interruption, and termination signals cannot simply say "32", because POSIX defines more than 32 possible signals in signal.h.
So how can a user program portably deal with that?
There are peculiar operating systems out there; how can I deal with that from my pathetic point of view? Maybe I don't want to maintain my program for a year or two, but still want it to work.

SIGTERM can be 35. SIGHUP can be 33. I just cannot use "32"; I possibly could use "64", people are doing this; no, I use NSIG and feel free.
I could imagine adjusting this to POSIX_NSIG or whatever, but no more.

P.S.: hard to believe that FreeBSD will ever introduce a new "old" signal; but then, I hope it would adjust NSIG.

P.P.S.: I once had written a very small program that I'd compiled under Linux (2.0.?), and (due to source code loss or so) it simply ran successfully under FreeBSD 4.7-9 and FreeBSD 5.3 for many years (almost a decade).
(0001828)
steffen (reporter)
2013-09-06 14:11

I.e., to clarify my intention: imho the standard has an obligation to provide a sane environment, which we currently do not have. I know that some very, very large C library includes realtime signals in NSIG. I didn't know that Solaris includes Java VM traps in there. But I do know that a lot of programs out there either assume (obviously wrong) conditions or implement incredibly costly configuration- or compile-time checks to get to a sane value they can use in place of NSIG. The intention of this issue was to push forward a solution, but unfortunately I underrated that this time the request would be treated unfiltered -- if I had known that, I would have emphasized SIGMAX or so more.
Thank you.
(0001829)
steffen (reporter)
2013-09-06 14:54

On the mailing list we've had:

Geoff Clare <gwc@opengroup.org> wrote:
 |> (0001825) steffen (reporter) - 2013-09-06 11:44
 |> http://austingroupbugs.net/view.php?id=741#c1825
 [.]
 |2. The existing non-standard NSIG constant has a different meaning,
 |on the systems that provide it, from what you are suggesting.

It surely was a fiddly and unclear description at the outset, I see that now.
It may have a fuzzy meaning (no, indeed it *does* have one), and it is possibly even used in just the same way, just as is necessary at the moment it is used...

It would then be best to define two standard preprocessor constants in addition to the _SC_NSIG that the core has already agreed to offer.
I.e., one which can be used to deal only with the POSIX-defined job control etc. signals, and one which loosely describes the absolute maximum that could in theory be used in a sigset_t, thus also including implementation specifics and realtime signals (avoiding the 8*sizeof(sigset_t) calculation that would have the same effect). That way the purpose of a declaration would be clear at a glance, as in:

  int seen_sig[NSIGNORM];
  int seen_sig[NSIGMAX];
(0001830)
shware_systems (reporter)
2013-09-06 17:32

Re: #1827
I used NSIG_MAX and NSIG_MIN_USED instead of POSIX_NSIG *shrug*. The premise was to handle a fixed amount of signals because no one does dynamic allocation in a portable way... To use your term, all the "ripoffs" are private to specific implementations or applications. The nucleus of portable dynamic signal "ripoff" support is provided by the 4 sysconf( ) values _SC_NSIG_USED, _SC_NSIG_AVAIL, _SC_SIGRT_USED and _SC_SIGRT_AVAIL I proposed in Note #1780. These are expected to be called multiple times in an application, not just at initialization, and used with interfaces that do manage "ripoffs" so that conflicts with other running applications are minimized. They do not have a #define counterpart because those would be minimums, not the maximums you want them to be so you could use what's "left over". The default SIGMAX for a dynamic implementation is INT_MAX, in other words.

Those managing interfaces have yet to be specified by any implementation, and the standard can only leave them as possibilities until someone actually does design and implement them. I don't have an implementation in running order yet with which I could do it. What the standard can do unilaterally is add on to something already standardized with a defined growth path, like the *conf( ) interfaces, and that's what's being hashed out.

Once changes like this are made, the next step is to complain on various implementations' developer lists that this new feature will be available, so could you please put some sort of interface set together that builds off it, as an extension to their libc or as a separate lib that only depends on the standard ones. Once that gets done, then it might come back here and be accepted, to do all that you're hoping for as a conformance matter.
(0001831)
geoffclare (manager)
2013-09-11 11:45

(response to Note: 0001829)

I would not object to adding an NSIGMAX constant which gives the
maximum possible value that sysconf(_SC_NSIG) can return. To make
it usable (i.e. disallow NSIGMAX == INT_MAX) we should tie it to
sigset_t somehow. Something like "the number of signals that
could be present in a completely filled sigset_t signal set if
any restrictions imposed by sigaddset() were removed, plus one
if that set does not include signal number 0".

I don't see how we could add NSIGNORM without, as a side-effect,
bringing in unreasonable requirements on the ordering of signal
numbers (i.e. that those which are not "NORM" signals come after
those which are).
(0001832)
steffen (reporter)
2013-09-11 12:43

Note: 0001831:
 I don't see how we could add NSIGNORM without, as a side-effect,
 bringing in unreasonable requirements on the ordering of signal
 numbers (i.e that those which are not "NORM" signals come after
 those which are)

POSIX could avoid defining any such requirement:
just define that `NSIGMAX' is the maximum to be expected without breaking binary compatibility, and that `NSIGNORM' (or `NSIGSTD' or similar) means the same regarding an implementation-specific subset that, however, by definition includes the signals that POSIX defines.

This would give applications the option to truly choose according to their needs. I.e., if an application is only interested in the POSIX standard signals, why should it pay in (time and) space for realtime signals? E.g., in the era of tiny microcontrollers it does make a difference whether I need a handler array of [256] or, say, [32]; on a list I'm tracking, someone noted today a 16 MHz controller with 32 KB memory, of which 4 KB are reserved for the bootloader...

Offering an additional `NSIGSTD' would give implementations the possibility of yet another point on their quality-of-implementation record.
And the best thing about this is that practically all implementations can make this point already today, without any effort.

I could try to phrase the changes accordingly, if necessary.
(0001833)
steffen (reporter)
2013-09-12 11:41

I'm not dealing with _SC_NSIG here, as in

  On page 444, add, after line 15055

    _SC_NSIG

because I fail to see how *I* could integrate the necessary further change for sysconf(3) somewhere on pages 2077-2079; it seems to me that adding _SC_NSIG would impose the necessity to offer signal-related facilities in either <limits.h> or <unistd.h>, presumably the former. (Or would the semantics of sysconf(3) have to be changed?)

The updated desired action from my side would thus read:

Add, on page 333, after line 11130, the new paragraph:

 [CX]The <signal.h> header shall declare the NSIGMAX and NSIGPOSIX macros, which shall expand to positive integer constant expressions. These macros specify maximum possible signal numbers, counting 0, that the implementation is willing to reserve without breaking binary compatibility due to changes of the sigset_t type (see the RATIONALE of sigemptyset()). NSIGMAX shall specify the corresponding maximum that the sigset_t type is capable of representing, thus including the POSIX-defined signals, the signals in the range SIGRTMIN to SIGRTMAX, as defined above, that are reserved for application use, as well as any additional implementation-defined signals. NSIGPOSIX shall specify a maximum value that only guarantees to cover possible binary incompatibilities regarding the POSIX-defined signals. It is unspecified whether these constants have identical values.[/CX]
(0001834)
nick (manager)
2013-09-12 16:43
edited on: 2013-09-19 19:08

Proposed changes:

in <limits.h> page 282 after line 9359, add:
{NSIG_MAX}
Maximum possible return value of sysconf(_SC_NSIG). See [cross-ref to XSH sysconf()].
The value of {NSIG_MAX} shall be no greater than the number of signals that the sigset_t type (see [cross-ref to <signal.h>]) is capable of representing, ignoring any restrictions imposed by sigfillset() or sigaddset().


Add to RATIONALE on P283, before L9380:
{NSIG_MAX}
Some historical implementations provided compile-time constants NSIG or SIGMAX to define the maximum number of signals the implementation supported, but these values did not necessarily reflect the number of signals that could be handled using a sigset_t. With the addition of real-time signals and the desire by some applications to be able to allocate additional real-time signals at run-time, neither of these constants provided a usable, portable value. NSIG_MAX was added to the standard to allow applications to determine the maximum number of signals that an implementation will support based on the size of the sigset_t type (defined in <signal.h>).


in <signal.h> page 332 line 11083 after:
The range SIGRTMIN through SIGRTMAX inclusive shall include at least {RTSIG_MAX} signal numbers.
append:
The value of SIGRTMAX shall be less than the value returned by sysconf(_SC_NSIG).


in <signal.h> page 332 line 11087 change:
The macros shall expand to positive integer constant expressions with type int and distinct values.
to:
The macros shall expand to positive integer constant expressions with type int and distinct values [CX]less than the value of {NSIG_MAX} defined in <limits.h>[/CX].


in <unistd.h> page 444 after line 15055 add:
_SC_NSIG


in sysconf() [XSH] to the table between page 2077 and 2079 add the following after line 66303 (_SC_MQ_PRIO_MAX):
  Highest supported signal number +1      _SC_NSIG
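A sketch of how an application might use the two proposed additions together (both hypothetical until applied): a statically sized table bounded by {NSIG_MAX}, walked only up to the run-time value.

#include <limits.h>    /* {NSIG_MAX}, per the <limits.h> change above */
#include <signal.h>
#include <unistd.h>

static int seen_sig[NSIG_MAX];      /* one slot per possible signal number */

static void note_signal(int signo)
{
    long nsig = sysconf(_SC_NSIG);  /* highest supported signal number + 1 */
    if (nsig > 0 && signo > 0 && signo < nsig)
        seen_sig[signo] = 1;
}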


(0001837)
shware_systems (reporter)
2013-09-19 17:28
edited on: 2013-09-19 18:04

I still think the Rationale should indicate these are relevant only for a sigset_t that handles a static amount of signal numbers, as has been historical practice, not a dynamically sizing one. Then NSIG_MAX is the default value supported, but the return value of sysconf(_SC_NSIG) may become larger than NSIG_MAX as application-specific signals are allocated. It also should be explicit that the value returned by sysconf(_SC_NSIG) is the number of signals less than NSIG_MAX that the particular implementation has defined SIGXXX constants for, plus the SIGRT range, at application startup time, and that any application-specific allocations done in the implementation-defined manner outside the SIGRT range shall increase that return value.

(0002642)
mirabilos (reporter)
2015-04-29 14:55

As a shell author, I need to be able to know the maximum number of signals the shell binary can possibly encounter in its lifetime, even after an OS upgrade. Thus, I used to expect that I can do (simplified by not taking the underscore versions into account)…

#ifndef NSIG
#define NSIG (SIGMAX + 1)
#endif
struct foo sigfoo[NSIG];

… and then access sigfoo[1] up to sigfoo[NSIG - 1], and that the OS signals are no smaller than 1 and no larger than (NSIG - 1).

From the currently proposed changes, I would use NSIGMAX as the size of the array, and then *possibly* use sysconf(_SC_NSIG) at run time, if a smaller number (e.g. those shown to the user with “kill -l”) can/should be used. For this, sysconf(_SC_NSIG) must always be > 1 and <= NSIGMAX.

If that is so, I can live with this.

- Issue History
Date Modified Username Field Change
2013-08-24 17:04 steffen New Issue
2013-08-24 17:04 steffen Name => Steffen Nurpmeso
2013-08-24 17:04 steffen Section => XBD, 13. Headers, signal.h
2013-08-24 17:04 steffen Page Number => 333
2013-08-24 17:04 steffen Line Number => 11131
2013-08-24 21:35 shware_systems Note Added: 0001755
2013-08-28 09:00 geoffclare Note Added: 0001758
2013-08-28 10:28 steffen Note Added: 0001760
2013-08-28 17:20 dalias Note Added: 0001763
2013-08-28 22:06 Don Cragun Note Edited: 0001760
2013-08-28 22:10 Don Cragun Interp Status => ---
2013-08-28 22:10 Don Cragun Note Added: 0001764
2013-08-28 22:10 Don Cragun Description Updated
2013-08-28 22:12 Don Cragun Note Edited: 0001764
2013-08-29 08:23 geoffclare Note Added: 0001766
2013-08-29 08:36 geoffclare Note Edited: 0001766
2013-08-29 11:17 steffen Note Added: 0001770
2013-08-30 08:53 shware_systems Note Added: 0001780
2013-09-06 06:14 shware_systems Note Added: 0001819
2013-09-06 06:21 dalias Note Added: 0001820
2013-09-06 09:13 geoffclare Note Added: 0001822
2013-09-06 09:13 shware_systems Note Added: 0001823
2013-09-06 10:02 shware_systems Note Added: 0001824
2013-09-06 10:54 shware_systems Note Edited: 0001824
2013-09-06 11:44 steffen Note Added: 0001825
2013-09-06 12:27 jilles Note Added: 0001826
2013-09-06 14:00 steffen Note Added: 0001827
2013-09-06 14:11 steffen Note Added: 0001828
2013-09-06 14:54 steffen Note Added: 0001829
2013-09-06 17:32 shware_systems Note Added: 0001830
2013-09-11 11:45 geoffclare Note Added: 0001831
2013-09-11 12:43 steffen Note Added: 0001832
2013-09-12 11:41 steffen Note Added: 0001833
2013-09-12 16:43 nick Note Added: 0001834
2013-09-12 16:44 nick Final Accepted Text => See Note: 0001834
2013-09-12 16:44 nick Status New => Resolution Proposed
2013-09-12 16:44 nick Resolution Open => Accepted As Marked
2013-09-12 16:44 nick Tag Attached: issue8
2013-09-12 17:33 Don Cragun Note Edited: 0001834
2013-09-12 17:36 Don Cragun Note Edited: 0001834
2013-09-12 17:54 Don Cragun Note Edited: 0001834
2013-09-12 18:21 rhansen Note Edited: 0001834
2013-09-12 18:27 rhansen Note Edited: 0001834
2013-09-12 18:31 rhansen Note Edited: 0001834
2013-09-12 18:35 rhansen Note Edited: 0001834
2013-09-12 18:37 rhansen Note Edited: 0001834
2013-09-12 18:37 rhansen Note Edited: 0001834
2013-09-19 15:14 geoffclare Note Edited: 0001834
2013-09-19 15:16 geoffclare Status Resolution Proposed => Resolved
2013-09-19 17:28 shware_systems Note Added: 0001837
2013-09-19 18:04 shware_systems Note Edited: 0001837
2013-09-19 19:08 steffen Note Edited: 0001834
2015-04-29 14:55 mirabilos Note Added: 0002642
2020-03-25 15:51 geoffclare Status Resolved => Applied

