A self-referential macro is one whose name appears in its definition. Recall that all macro definitions are rescanned for more macros to replace. If the self-reference were considered a use of the macro, it would produce an infinitely large expansion. To prevent this, the self-reference is not considered a macro call. It is passed into the preprocessor output unchanged. Consider an example:
#define foo (4 + foo)
where ‘foo’ is also a variable in your program.
Following the ordinary rules, each reference to ‘foo’ will expand into
(4 + foo)
; then this will be rescanned and will expand into
(4 + (4 + foo))
; and so on until the computer runs out of memory.
The self-reference rule cuts this process short after one step, at
(4 + foo)
. Therefore, this macro definition has the possibly useful effect of causing the program to add 4 to the value of ‘foo’ wherever ‘foo’ is referred to.
In most cases, it is a bad idea to take advantage of this feature. A person reading the program who sees that ‘foo’ is a variable will not expect that it is a macro as well. The reader will come across the identifier ‘foo’ in the program and think its value should be that of the variable ‘foo’, whereas in fact the value is four greater.
One common, useful use of self-reference is to create a macro which expands to itself. If you write
#define EPERM EPERM
then the macro ‘EPERM’ expands to ‘EPERM’. Effectively, it is left alone by the preprocessor whenever it's used in running text. You can tell that it's a macro with ‘#ifdef’. You might do this if you want to define numeric constants with an ‘enum’, but have ‘#ifdef’ be true for each constant.
If a macro ‘x’ expands to use a macro ‘y’, and the expansion of ‘y’ refers to the macro ‘x’, that is an indirect self-reference of ‘x’. ‘x’ is not expanded in this case either. Thus, if we have
#define x (4 + y)
#define y (2 * x)
then ‘x’ and ‘y’ expand as follows:
x    ==> (4 + y)
     ==> (4 + (2 * x))

y    ==> (2 * x)
     ==> (2 * (4 + y))
Each macro is expanded when it appears in the definition of the other macro, but not when it indirectly appears in its own definition.