[Dwarf-Discuss] DWARF piece questions

Andreas Arnez arnez@linux.vnet.ibm.com
Fri Jan 27 14:49:29 GMT 2017


On Thu, Jan 26 2017, Michael Eager wrote:

> On 01/26/2017 11:17 AM, Andreas Arnez wrote:
>> Exactly: the current DWARF text *differs* from the usual "defined by the
>> ABI"-principle when it states for DW_OP_bit_piece: "If the location is a
>> register, the offset is from the least significant bit end of the
>> register".  This definition limits the ABI's freedom such that register
>> growth can only be anchored at the "least significant bit".
>
> That's not the case.  The ABI is free to put a value where ever it
> wishes in a register.  The DWARF description will be different,
> depending on where the ABI puts the value, indexed from the
> least-significant bit of the register.  DW_OP_bit_piece is designed
> explicitly to support this.
>
> I have to admit that I'm unclear exactly what you mean by "register
> growth".  But if you load a 16-bit value into the most-significant
> half of a 32-bit register (is this growing a register?) then you would
> describe the value in the register with a length of 16 and an offset
> of 16.  Same applies for 32-bit values in 64-bit registers.
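
(For concreteness -- a sketch, with <r> as a placeholder register: the
placement Michael describes would be written as something like

    DW_OP_regx <r>
    DW_OP_bit_piece 16, 16    ; 16-bit piece, 16 bits from the LSB end

with the offset counted from the least significant bit, as the current
DWARF text prescribes.)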

By "register growth" I mean those cases where a new ISA release upgrades
a register to a larger size.  This happened when upgrading various
architectures from 32 to 64 bit, when extending floating-point registers
to vector registers, when increasing the size of existing vector
registers, etc.

Now assume that the ABI defines a bit numbering scheme for
DW_OP_bit_piece according to the current DWARF standard's definition,
and that a new version of the ABI supports a new ISA release with
"grown" registers.

If all new bits are "even more significant", then the numbering scheme
from the previous architecture version can be preserved.
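
(Sketch, with %r standing for a hypothetical GPR that grows from 32 to
64 bits: an old description such as

    DW_OP_regx %r
    DW_OP_bit_piece 32, 0     ; the original 32 bits, anchored at the LSB

still selects the same bits afterwards, because the new bits occupy
offsets 32-63.)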

But if some "even less significant" bits were added (as with
z/Architecture, where a newer release extended the 64-bit FP registers
into the most significant halves of 128-bit vector registers), then the
numbering scheme has to change.  This breaks compatibility with the
debug info in existing programs.  That's the problem I was trying to
outline above.
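
To make this concrete (a sketch; register names follow the usual
z/Architecture spelling, and I assume the DWARF register number now
names the full 128-bit register): under the old numbering, a double in
%f0 could be described as

    DW_OP_regx %f0
    DW_OP_bit_piece 64, 0     ; the whole 64-bit FPR, offset 0 from the LSB

Once %f0 becomes the most significant half of the 128-bit %v0, those
same bits sit at offset 64 from the least significant bit:

    DW_OP_regx %v0
    DW_OP_bit_piece 64, 64

A consumer that applies the new register size to old debug info with
offset 0 would read the wrong half.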

I still don't understand *why* DWARF insists on trying to establish a
universal register bit numbering scheme, and why it does so just for
the definition of DW_OP_bit_piece.  I don't know of any other normative
source that attempts this; DWARF usually avoids going into such
low-level detail and leaves it to the ABI instead.  The fact that it
does so in this case also breaks the link to DW_OP_piece, where the
placement *can* be freely defined by the ABI.

For instance, why does DWARF not define the bit numbering for all kinds
of bit pieces (memory, register, stack values, implicit values) in the
same way?  All objects we can take pieces from have a memory
representation, so we could always define the bit order to be the same
as for memory objects.  This would require much less special handling
in DWARF producers and consumers.
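
(Sketch of what that would buy us, assuming the big-endian bit
numbering that the z/Architecture documentation uses for memory, where
bit 0 is the most significant bit of the lowest-addressed byte: the FPR
half of a vector register would then be

    DW_OP_regx %v0
    DW_OP_bit_piece 64, 0     ; first 64 bits in memory order = MSB half

and that offset stays 0 no matter how often the register grows at the
"less significant" end.)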

The only possible reasons I can think of for *not* choosing memory bit
order for register bit pieces are:

(a) To make DW_OP_piece(n) equivalent to DW_OP_bit_piece(8*n, 0); see
    the sketch after this list.  But then we must leave the bit
    numbering to the ABI instead of trying to define a universal one.

(b) To support "register growth" as described above.  But then we must
    leave the bit numbering to the ABI as well, because DWARF does not
    know the direction of growth.
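
Regarding (a), a sketch with <r> as a placeholder register: under the
current definition,

    DW_OP_regx <r>
    DW_OP_piece 4

and

    DW_OP_regx <r>
    DW_OP_bit_piece 32, 0

describe the same 32 register bits only if the ABI happens to anchor
DW_OP_piece pieces at the least significant bit; with any other ABI
placement the equivalence already fails.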

Is there any advantage of the "bit significance" numbering scheme at
all?  I can't think of any.

--
Andreas



