[Dwarf-Discuss] DWARF piece questions

Andreas Arnez arnez@linux.vnet.ibm.com
Fri Jan 27 19:15:54 GMT 2017


On Fri, Jan 27 2017, Michael Eager wrote:

> On 01/27/2017 06:49 AM, Andreas Arnez wrote:
>> But if some "even less significant" bits were added (such as with
>> z/Architecture, where a newer release extended 64-bit FP-registers to
>> 128-bit vectors), then the numbering scheme has to change.  This breaks
>> compatibility with the debug info in existing programs.  That's the
>> problem I was trying to outline above.
>
> You need to emulate the old architecture on the new architecture.  You
> cannot assume that DWARF generated for an old architecture will be
> usable without interpretation on an arbitrarily different new
> architecture.

So, from a DWARF perspective, you'd expect all libraries to be
recompiled when migrating from an older x86-64 CPU to a newer one that
has AVX-512?  Or, as in the z/Architecture case, from a zEC12 to a z13
system?  You don't consider it valid for old and new binaries to coexist
in the same program?
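
To make the problem concrete, here is a rough sketch of what goes wrong
under the current "offset from the least significant bit" rule.  The
assumptions are mine: the old FPR overlays the leftmost (most
significant) 64 bits of the new 128-bit vector register, and the
producer emitted DW_OP_bit_piece(32, 0) against the old 64-bit view of
the register:

    def bit_piece_lsb(reg_width_bits, piece_size, piece_offset):
        # Bits named by DW_OP_bit_piece(piece_size, piece_offset) when
        # the offset is counted from the register's least significant
        # bit, as the current wording requires.
        lo, hi = piece_offset, piece_offset + piece_size - 1
        assert hi < reg_width_bits
        return (lo, hi)                    # 0 == least significant bit

    def fpr_bit_in_vr(fpr_bit):
        # Assumed overlay: the FPR is the leftmost 64 bits of the vector
        # register, so the FPR's bit 0 is bit 64 of the 128-bit register.
        return fpr_bit + 64

    old    = bit_piece_lsb(64, 32, 0)          # (0, 31) within f0
    wanted = tuple(map(fpr_bit_in_vr, old))    # (64, 95) within v0
    reread = bit_piece_lsb(128, 32, 0)         # (0, 31) within v0

The expression in the debug info hasn't changed, but a consumer that now
models the register as 128 bits wide picks bits 0-31 of the vector
register, which aren't part of the old FP register at all.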

>> I still haven't understood *why* DWARF insists on trying to establish a
>> universal register bit numbering scheme, and just for the definition of
>> DW_OP_bit_piece?  I don't know of any other normative source that tries
>> this; and DWARF usually avoids going into such low-level detail, leaving
>> it to the ABI instead.  The fact that it does in this case also breaks
>> the link to DW_OP_piece, where the placement *can* be freely defined by
>> the ABI.
>
> With the exception I mentioned above, DWARF doesn't mention bit
> numbering.  DWARF makes no mention of bit numbering with regard to
> registers, and clearly doesn't establish a universal register bit
> numbering scheme.

It does at least implicitly, when defining the placement rule for
DW_OP_bit_piece.  This definition implies a "universal" bit numbering
scheme that starts with 0 at a register's "least significant bit".

> Different ABIs number register bits in different ways.
>
>> For instance, why does DWARF not define the bit numbering for all kinds
>> of bit pieces (memory, register, stack values, implicit values) in the
>> same way?  All objects we can take pieces from have a memory
>> representation, so we could always define the bit order to be the same
>> as for memory objects.  This would require much less special handling for
>> DWARF producers/consumers.
>
> We are discussing adding clarifying text which will make it clear that
> register values, implicit values, and stack values are all handled in
> the same fashion.

I don't think that's a good idea.  My point above was just to question
the motivation for the current definition of DW_OP_bit_piece.

> Memory is a more complex issue, because this is where the issues of
> little-endian and big-endian come into play, and not all architectures
> map values to memory in the same fashion.

Curiously, I would say memory is the simple case, because the memory bit
order is defined by all ABIs I know of.  Also, DWARF relies on it
anyhow, for instance in the definition of DW_AT_data_bit_offset.
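
For illustration, here is roughly what I mean by "memory bit order"; a
sketch rather than spec wording, where the msb0 flag stands in for
whatever bit-within-byte numbering the ABI defines:

    def memory_bit(bit_offset, msb0):
        # Map a bit offset within an object's memory representation to a
        # (byte index, bit within byte) pair, using the ABI's bit order.
        byte, bit = divmod(bit_offset, 8)
        return (byte, 7 - bit if msb0 else bit)

    print(memory_bit(11, msb0=True))     # (1, 4) on an MSB-0 ABI
    print(memory_bit(11, msb0=False))    # (1, 3) on an LSB-0 ABI

Applying the same mapping to a register's memory representation would
give register bit pieces a placement that is just as well defined as
DW_AT_data_bit_offset already is for memory.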

> The ordering of values in memory is not the same as in registers.

Not sure what you're trying to say with that.

>> The only possible reasons I can think of for *not* choosing memory bit
>> order for register bit pieces are:
>>
>> (a) To make DW_OP_piece(n) equivalent to DW_OP_bit_piece(8*n, 0).  But
>>      then we must leave the bit numbering to the ABI instead of trying to
>>      define a universal one.
>
> Exactly the opposite appears to be true.  Defining DW_OP_piece in terms
> of something defined (or perhaps undefined) in an ABI makes it possible to
> create situations where this equivalence is false.

Maybe you misread my point?  I wrote "register *bit* pieces", i.e., I
was discussing the definition of DW_OP_bit_piece.  I do not question
that the placement rule of DW_OP_piece is to be defined by the ABI.
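
To spell out point (a): the equivalence can only hold if the bit
numbering follows whatever placement the ABI chooses for DW_OP_piece.
A sketch, assuming a 64-bit register on a target whose ABI happens to
place a 4-byte piece in the most significant half (the placement is the
ABI's call, so this is just one possibility):

    REG_WIDTH = 64

    def piece_bits_per_abi(size_bytes):
        # Hypothetical ABI rule: a sub-register piece occupies the
        # high-order end of the register.
        return (REG_WIDTH - 8 * size_bytes, REG_WIDTH - 1)

    def bit_piece_bits_lsb_rule(size_bits, offset_bits):
        # Current DWARF wording: the offset counts from the least
        # significant bit of the register.
        return (offset_bits, offset_bits + size_bits - 1)

    print(piece_bits_per_abi(4))            # (32, 63)
    print(bit_piece_bits_lsb_rule(32, 0))   # (0, 31) -- not the same bits

With the least-significant-bit wording the two only coincide on ABIs
that happen to place sub-register pieces at the least significant end;
to make the equivalence hold in general, the bit numbering would have to
be left to the ABI as well.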

>> Is there any advantage of the "bit significance" numbering scheme at
>> all?  I can't think of any.
>
> DWARF refers to most-significant bit and least significant bit.  These
> concepts appear to be well defined and independent of any bit numbering
> scheme used by the ABI.

They are not well defined when applied to registers.  But even if they
were, what's the practical advantage of being independent of the ABI?
We have ABI dependencies all over the place.

--
Andreas



