[Dwarf-Discuss] DWARF on systems where memory is not byte addressable

Joeri van Ruth joeri@ace.nl
Thu Jul 26 05:45:48 GMT 2012


Hello all, I am wondering how to deal with platforms with word-addressed
memories, by which I mean that the smallest addressable unit in memory
is (in our current case) 32 bits wide.  This means that at the C level,

	sizeof(char) == sizeof(short) == sizeof(int) == 1,

so far so good.  However, we are having problems with gdb.  I am aware
that this may be entirely gdb specific, but I do note that the standard
does not spend many words on the issues that arise here, which is why
I bring it up on this list.
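
For concreteness, here is a tiny C program for such a target; the 32-bit
word-addressed machine is of course an assumption on my part, and on an
ordinary byte-addressed host the same program simply prints different
numbers:

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
	    /* On a word-addressed target where the smallest addressable unit
	     * is 32 bits, CHAR_BIT is 32 and char, short and int each occupy
	     * exactly one addressable unit. */
	    printf("CHAR_BIT      = %d\n",  CHAR_BIT);       /* 32 on such a target */
	    printf("sizeof(char)  = %zu\n", sizeof(char));   /* 1, by definition    */
	    printf("sizeof(short) = %zu\n", sizeof(short));  /* 1                   */
	    printf("sizeof(int)   = %zu\n", sizeof(int));    /* 1                   */
	    return 0;
	}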

The standard does not seem to define anywhere how large a byte is
supposed to be.  Historically, older architectures used anything from
6-, 7-, 8- and 9-bit bytes, which is why networking standards tend to
speak of octets instead.  DWARF seems to assume 8-bit bytes, hence the
LEB128 encoding, but it does not say so explicitly, unless I have
overlooked something.
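
The 8-bit assumption shows up clearly in LEB128 itself: each encoded byte
carries 7 payload bits plus a continuation bit, which only really makes
sense if the encoding unit is an octet.  A minimal unsigned LEB128 decoder,
just as a sketch (no bounds checking):

	#include <stddef.h>
	#include <stdint.h>

	/* Decode an unsigned LEB128 value: each byte contributes its low 7 bits
	 * to the result, and the high bit says whether another byte follows. */
	static uint64_t decode_uleb128(const uint8_t *p, size_t *len_out)
	{
	    uint64_t result = 0;
	    unsigned shift  = 0;
	    size_t   len    = 0;

	    for (;;) {
	        uint8_t byte = p[len++];
	        result |= (uint64_t)(byte & 0x7f) << shift;
	        if ((byte & 0x80) == 0)
	            break;
	        shift += 7;
	    }
	    if (len_out)
	        *len_out = len;
	    return result;
	}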

A C-oriented view might observe that sizeof(char) == sizeof(int), and
since C does not distinguish clearly between byte and char, take a byte
to be 32 bits wide.  But even that is not always the case: we sometimes
see word-oriented platforms which still take the arithmetic size of
char to be 8 bits, requiring frequent sign- and zero-extension when
assigning to a char or short variable.

However, I assume that if the DWARF standard were explicit about the
size of a byte, it would define a byte to be 8 bits.

The problem we see with gdb hinges on the DW_AT_byte_size attribute of
a type descriptor.  Gdb uses it for at least two purposes:

	- to perform address arithmetic

	- to determine the bit size of values

If we set the DW_AT_byte_size of an integer to 1, gdb does the
address arithmetic correctly, that is, it looks for int_array[1] at
address int_array + 1, not int_array + 4; but if you ask for the value
of an int variable it only displays the lower 8 bits.

If we instead set the DW_AT_byte_size of int to 4, which indeed sounds
more consistent given the name _byte_size, gdb extracts the full
32 bits of the value but gets the address arithmetic wrong:
int_array[1] now actually accesses int_array[4].
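
To make both failure modes concrete, here is a sketch of the arithmetic a
debugger conventionally performs; the helper names and the base address are
mine, not gdb's:

	#include <stdint.h>
	#include <stdio.h>

	/* Conventional debugger behaviour: element address = base + index *
	 * byte_size, and the value read is byte_size * 8 bits wide. */
	static uint64_t element_address(uint64_t base, uint64_t index,
	                                uint64_t byte_size)
	{
	    return base + index * byte_size;
	}

	static unsigned value_bits_read(uint64_t byte_size)
	{
	    return (unsigned)(byte_size * 8);
	}

	int main(void)
	{
	    uint64_t base = 0x1000;  /* hypothetical address of int_array */

	    /* DW_AT_byte_size = 1: addressing is right, value width is wrong. */
	    printf("byte_size=1: &int_array[1] = 0x%llx, reads %u bits\n",
	           (unsigned long long)element_address(base, 1, 1),
	           value_bits_read(1));

	    /* DW_AT_byte_size = 4: value width is right, addressing is wrong;
	     * on a word-addressed machine this lands on int_array[4]. */
	    printf("byte_size=4: &int_array[1] = 0x%llx, reads %u bits\n",
	           (unsigned long long)element_address(base, 1, 4),
	           value_bits_read(4));

	    return 0;
	}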

It seems to me that the proper fix would be for gdb to take the size of
the addressable unit into account as general knowledge of the target
platform, but I can't believe we are the first to come across this.  I
wonder whether anyone on this list has already faced similar issues and
what they did about it.
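
By taking the addressable unit into account I mean something along these
lines; the helper and its inputs are purely illustrative, not something gdb
does today:

	#include <stdint.h>
	#include <stdio.h>

	/* Compute an array element's stride in *address units* rather than
	 * assuming 8-bit bytes.  bits_per_addr_unit would come from the
	 * debugger's knowledge of the target: 32 on the machine described
	 * above, 8 on an ordinary byte-addressed one. */
	static uint64_t stride_in_address_units(uint64_t type_bit_size,
	                                        uint64_t bits_per_addr_unit)
	{
	    /* Round up so sub-unit types still occupy one full address unit. */
	    return (type_bit_size + bits_per_addr_unit - 1) / bits_per_addr_unit;
	}

	int main(void)
	{
	    /* A 32-bit int on the 32-bit word-addressed target: stride 1, so
	     * int_array[1] lives at int_array + 1 while the full 32 bits are
	     * still read for the value. */
	    printf("stride = %llu\n",
	           (unsigned long long)stride_in_address_units(32, 32));
	    return 0;
	}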

Best regards,

Joeri van Ruth

-- 
Joeri van Ruth, ACE Associated Compiler Experts
De Ruyterkade 113, 1011 AB Amsterdam, The Netherlands.
Tel: +31 20 6646416, Fax: +31 20 6750389,
mailto:joeri at ace.nl, http://www.ace.nl





