[Dwarf-Discuss] DWARF piece questions

Michael Eager eager@eagercon.com
Fri Jan 27 17:40:37 GMT 2017


On 01/27/2017 06:49 AM, Andreas Arnez wrote:
> On Thu, Jan 26 2017, Michael Eager wrote:
>
>> On 01/26/2017 11:17 AM, Andreas Arnez wrote:
>>> Exactly: the current DWARF text *differs* from the usual "defined by
>>> the ABI" principle when it states for DW_OP_bit_piece: "If the
>>> location is a register, the offset is from the least significant bit
>>> end of the register".  This definition limits the ABI's freedom such
>>> that register growth can only be anchored at the "least significant
>>> bit".
>>
>> That's not the case.  The ABI is free to put a value wherever it
>> wishes in a register.  The DWARF description will be different,
>> depending on where the ABI puts the value, indexed from the
>> least-significant bit of the register.  DW_OP_bit_piece is designed
>> explicitly to support this.
>>
>> I have to admit that I'm unclear exactly what you mean by "register
>> growth".  But if you load a 16-bit value into the most-significant
>> half of a 32-bit register (is this growing a register?) then you would
>> describe the value in the register with a length of 16 and an offset
>> of 16.  The same applies to 32-bit values in 64-bit registers.
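
In DWARF expression terms, that description might look like this (the
register number 5 is hypothetical; the mapping from ISA registers to
DWARF register numbers is defined by the ABI):

     DW_OP_regx 5             # the value is in register 5
     DW_OP_bit_piece 16, 16   # a 16-bit piece, offset 16 bits from the
                              # least significant bit of the register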
>
> By "register growth" I mean those cases where a new ISA release upgrades
> a register to a larger size.  This happened when upgrading various
> architectures from 32 to 64 bit, when extending floating-point registers
> to vector registers, when increasing the size of existing vector
> registers, etc.

The requirements of running a program created for one architecture on
a different (even if similar) architecture are not something the
DWARF specification defines.

It would seem to me that you need to define a mapping from the old
architecture to the new one, so that you have a clear definition of
what it means to reference a 32-bit register on a 64-bit architecture.
This is out of the scope of DWARF.

> Now assume that the ABI defines a bit numbering scheme for
> DW_OP_bit_piece according to the current DWARF standard's definition,
> and that a new version of the ABI supports a new ISA release with
> "grown" registers.

DWARF mentions bit numbering only in the definition of DW_OP_bit_piece,
and only with regard to values in memory; it probably should not be
doing even that.

> If all new bits are "even more significant", then the numbering scheme
> from the previous architecture version can be preserved.
>
> But if some "even less significant" bits were added (such as with
> z/Architecture, where a newer release extended 64-bit FP-registers to
> 128-bit vectors), then the numbering scheme has to change.  This breaks
> compatibility with the debug info in existing programs.  That's the
> problem I was trying to outline above.

You need to emulate the old architecture on the new architecture.  You
cannot assume that DWARF generated for an old architecture will be usable
without interpretation on an arbitrarily different new architecture.
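
Andreas's z/Architecture case illustrates why.  Consider this sketch
(the register number is hypothetical):

     DW_OP_regx 16
     DW_OP_bit_piece 64, 0    # 64 bits at offset 0 from the least
                              # significant bit of the register

Read against the old ISA, where the register is 64 bits wide, this
selects the entire FP register.  If a newer ISA extends that register
to a 128-bit vector by adding bits at the less significant end, the
same expression read against the new register selects the newly added
low half, not the original FP bits.  The old debug info is only correct
when interpreted against the old register model.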

> I still haven't understood *why* DWARF insists on trying to establish a
> universal register bit numbering scheme, and just for the definition of
> DW_OP_bit_piece?  I don't know of any other normative source that tries
> this; and DWARF usually avoids going into such low-level detail, leaving
> it to the ABI instead.  The fact that it does in this case also breaks
> the link to DW_OP_piece, where the placement *can* be freely defined by
> the ABI.

With the exception I mentioned above, DWARF doesn't mention bit numbering
at all.  In particular, it makes no mention of bit numbering with regard
to registers, and clearly doesn't establish a universal register bit
numbering scheme.

Different ABIs number register bits in different ways.

> For instance, why does DWARF not define the bit numbering for all kinds
> of bit pieces (memory, register, stack values, implicit values) in the
> same way?  All objects we can take pieces from have a memory
> representation, so we could always define the bit order to be the same
> as for memory objects.  This would cause much less special handling for
> DWARF producers/consumers.

We are discussing adding text to clarify that register values, implicit
values, and stack values are all handled in the same fashion.
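
A sketch of the intended symmetry (the constants and register numbers
are hypothetical):

     DW_OP_regx 5             # the value is held in register 5
     DW_OP_bit_piece 16, 0    # its low 16 bits

     DW_OP_constu 0x1234
     DW_OP_stack_value        # the value is computed on the DWARF stack
     DW_OP_bit_piece 16, 0    # its low 16 bits

In both cases the bit offset counts from the least significant bit of
the value.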

Memory is a more complex issue, because this is where little-endian and
big-endian byte ordering comes into play, and not all architectures map
values to memory in the same fashion.  The ordering of a value's bytes
in memory is not the same as in a register.
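
For example, a 32-bit value 0x11223344 stored at address A is laid
out as:

     little-endian:  A: 44   A+1: 33   A+2: 22   A+3: 11
     big-endian:     A: 11   A+1: 22   A+2: 33   A+3: 44

In a register, by contrast, "least significant bit" names the same bits
regardless of the architecture's memory byte order.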

>
> The only possible reasons I can think of for *not* choosing memory bit
> order for register bit pieces are:
>
> (a) To make DW_OP_piece(n) equivalent to DW_OP_bit_piece(8*n, 0).  But
>      then we must leave the bit numbering to the ABI instead of trying to
>      define a universal one.

Exactly the opposite appears to be true.  Defining DW_OP_piece in terms
of something defined (or perhaps undefined) in an ABI makes it possible
to create situations where this equivalence is false.
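
For the equivalence to hold unconditionally, these two descriptions
must always select the same bits (the register number is hypothetical):

     DW_OP_regx 5
     DW_OP_piece 4            # a 4-byte piece of register 5; placement
                              # within the register is ABI-defined

     DW_OP_regx 5
     DW_OP_bit_piece 32, 0    # 32 bits at offset 0 from the least
                              # significant bit of register 5

An ABI that places the 4-byte piece anywhere other than the low 32 bits
of the register makes the two descriptions disagree.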

>
> (b) To support "register growth" as described above.  But then we must
>      leave the bit numbering to the ABI as well, because DWARF does not
>      know the direction of growth.

DWARF doesn't attempt to specify how a program compiled for one architecture
should be interpreted on a different architecture.

>
> Is there any advantage of the "bit significance" numbering scheme at
> all?  I can't think of any.

DWARF refers to the most-significant bit and the least-significant bit.
These concepts appear to be well defined and independent of any bit
numbering scheme used by the ABI.

-- 
Michael Eager	 eager at eagercon.com
1960 Park Blvd., Palo Alto, CA 94306  650-325-8077


