uint32_t type

Started by csfreebird, April 17, 2013, 07:06:35 AM


csfreebird

How do I define a variable that contains a uint32_t integer?

Any example?

Lutz

#1
By default, integers in newLISP are 64 bits, but when interfacing with external C libraries the precision is automatically truncated to 32, 16, or 8 bits if the imported function asks for it.
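For example, here is a minimal sketch of calling an imported C function that takes a uint32_t argument. The library name is Linux/glibc specific and htonl is only an illustration, but it shows the 64-bit newLISP integer being truncated to 32 bits for the call:

; assumes Linux with glibc; on other systems the library name differs
(import "libc.so.6" "htonl")

; htonl takes a uint32_t, so the 64-bit newLISP integer is truncated to 32 bits
(htonl 1) ; => 16777216 (0x01000000) on a typical little-endian machine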



When reading or writing files or memory, use pack and unpack to format an integer to the exact number of bits (8, 16, 32, or 64) required, signed or unsigned. There are also the functions get-char, get-int, and get-long to read 8-, 32-, and 64-bit integers from a memory address. Once an integer smaller than 64 bits is held by a variable, it can grow up to a signed 64-bit number before it overflows when you add to it.
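
For example, a quick sketch with pack and unpack; in the format string, "lu" is an unsigned 32-bit integer, "ld" a signed one, and ">" switches to big-endian byte order:

; pack a value into 4 bytes as an unsigned 32-bit big-endian integer
(set 'buf (pack ">lu" 4294967284)) ; 4 bytes: 0xFF 0xFF 0xFF 0xF4

; unpack returns a list of values
(unpack ">lu" buf) ; => (4294967284)
(unpack ">ld" buf) ; => (-12), the same bytes read as signed 32-bit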

csfreebird

#2
I used pack with unpack for this.

(define (u4 v)
  (first (unpack "lu" (pack "lu" v))))


Here are some examples of calling the u4 function:

> (define (u4 v)(first (unpack "lu" (pack "lu" v))))
(lambda (v) (first (unpack "lu" (pack "lu" v))))
> (u4 8)
8
> (u4 -12)
4294967284
>


But it's not convenient, and it's inefficient.

Lutz

#3
You could use bit masks:


> (& 0xFFFFFFFF -12)
4294967284
>

csfreebird

#4
That's interesting. Why doesn't it change the value, but makes it look like an unsigned integer instead of a signed one?

Lutz

#5
On current computers and OSs, negative numbers are encoded as two's complement:


> (bits -12)
"1111111111111111111111111111111111111111111111111111111111110100"
> (bits (& 0xFFFFFFFF -12))
"11111111111111111111111111110100"
>
> 0b1111111111111111111111111111111111111111111111111111111111110100
-12
> 0b11111111111111111111111111110100
4294967284
>
> 0b0000000000000000000000000000000011111111111111111111111111110100
4294967284
>


http://en.wikipedia.org/wiki/Two%27s_complement



The above is the 32-bit representation of -12 in a 32-bit integer field. Because newLISP handles all integers in 64-bit fields, it fills all 32 higher bits with 1's. The bit mask 0xFFFFFFFF then masks them out again, filling all 32 high bits with 0's and making the 64-bit value 4294967284; but if you look only at the 32 lower bits, you have -12 again.
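
If you need the signed interpretation back from such a masked value, something like this would do it (the name s4 is just an example):

; reinterpret an unsigned 32-bit value as signed two's complement:
; if bit 31 is set, subtract 2^32
(define (s4 v)
  (if (>= v 0x80000000) (- v 0x100000000) v))

(s4 4294967284) ; => -12
(s4 8)          ; => 8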



In order to help you, it would be useful if you told us what you are trying to do from an application point of view. Why do you want to convert 64-bit integers to 32-bit integers? What is your application? Are you interfacing with external libraries? Are you writing out binary data to a file?

csfreebird

#6
My newLISP app is used to simulate 10,000 devices that connect to my TCP server.

I have been doing some parallel testing recently.

We transfer 16-bit or 32-bit integers in big-endian order via TCP.

The network traffic means cost for our business. :)
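
Roughly, each simulated device does something like this to put a 32-bit value on the wire in big-endian order (host, port, and value are just placeholders):

; ">" makes pack use big-endian (network) byte order,
; "lu" is an unsigned 32-bit integer (exactly 4 bytes on the wire)
(set 'payload (pack ">lu" 4294967284))

; placeholder connection; the real host and port differ
(set 'sock (net-connect "localhost" 12345))
(net-send sock payload)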