## floaty fiddling


**6th May 2008 | 20:24**

So I needed some Lua functions to extract bit fields from an integral value, specifically `struct stat.st_mode`. Lua only has floating point numbers, and its standard library doesn't extend beyond ANSI C. So I wrote the following. (Note that `a % b == a - floor(a/b)*b`.)

```lua
function mask(lobit, hibit, num)
	local toolo = num % 2^lobit
	return (num - toolo) % 2^hibit
end

function bit(bit, num)
	return num / 2^bit % 2 >= 1
end
```
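As a cross-check, the same arithmetic translates directly into Python (my port for illustration, not part of the original post). `mask` keeps bits `lobit`..`hibit-1` in place without shifting them down, and `bit` tests a single bit, using only division and modulo:

```python
def mask(lobit, hibit, num):
    """Keep bits lobit..hibit-1 of num (the result is not shifted down)."""
    toolo = num % 2 ** lobit            # the bits below lobit
    return (num - toolo) % 2 ** hibit   # drop them, then drop bits >= hibit

def bit(b, num):
    """True if bit b of num is set, without any bitwise operators."""
    return num / 2 ** b % 2 >= 1
```

For example, with a permissions-style value: `mask(6, 9, 0o755)` gives `0o700` (the owner bits, still in position) and `bit(8, 0o755)` is true. As in the Lua original, this stays exact only while the values fit in the float mantissa (up to 2^53 here), which is ample for `st_mode`.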

**bellinghman** | 6th May 2008 19:55 (UTC)

And yes, I've been there before. Or somewhere similar - the calculator I have on my desk is one I had to go out and buy, because the machine I was dealing with only supported output in decimal, whereas I actually needed to know bit statuses.

Twenty plus years later, I've yet to find out how to change the battery, but it's still the one I reach for. Oh yes, no battery - it was one of the earliest solar calculators from Casio.


**fanf** | 7th May 2008 08:44 (UTC)


**ingulf** | 6th May 2008 20:59 (UTC)

I did something like that ages ago. At the time Perl had a bignum module, but it didn't support bitwise operations (mainly because its representation was decimal strings). Be warned: I kept getting bitten by special cases. I can't remember what they were, but I think they had to do with signedness.

Question: Arthur Norman, in his lecture on multiplication of really large numbers, gave a (very short) proof that division does not need to be more expensive than multiplication. Can you remember what it was?

It occurred to me recently that for floats, you can have fast division by writing a float class which has one exponent but two mantissas. The value of the object is a/b, where a is (exp, mant1) and b is (0, mant2).

Division has the same cost as multiplication, but unfortunately so do addition and subtraction, so this scheme is unlikely to be very useful. But I don't know if I invented that or heard it in ACN's lecture.
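The trade-off can be sketched numerically (my illustration, ignoring the shared-exponent packing): keep values as an unreduced pair representing a/b, so dividing is just a cross-multiply, but adding picks up multiplications too.

```python
class Ratio:
    """A value stored as the unreduced pair num/den, so division costs
    the same number of multiplications as multiplication does."""

    def __init__(self, num, den=1.0):
        self.num, self.den = float(num), float(den)

    def __mul__(self, other):
        # (a1/b1) * (a2/b2) = (a1*a2) / (b1*b2): two multiplications.
        return Ratio(self.num * other.num, self.den * other.den)

    def __truediv__(self, other):
        # (a1/b1) / (a2/b2) = (a1*b2) / (b1*a2): also two multiplications.
        return Ratio(self.num * other.den, self.den * other.num)

    def __add__(self, other):
        # ...but addition now needs three multiplications as well,
        # which is why the scheme is unlikely to pay off in practice.
        return Ratio(self.num * other.den + other.num * self.den,
                     self.den * other.den)

    def value(self):
        return self.num / self.den
```

For instance, `(Ratio(3) / Ratio(4)).value()` gives `0.75` with no true division until the final readout.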


**fanf** | 7th May 2008 08:59 (UTC)

My bit twiddling requirements are very simple so I'm not worried about negative numbers :-)

I don't have a clue about ACN's division complexity proof.


**bellinghman** | 7th May 2008 09:19 (UTC)

(Though it is an easy optimisation if it's a compile-time-known value. Or loop-invariant.)

**bellinghman** | 7th May 2008 09:17 (UTC)

I *could* have modified the language to support arrays, but heck, it could do with a for() loop first. The algorithm didn't really *need* an array - not when it has strings with addressable characters.

**pjc50** | 7th May 2008 10:31 (UTC)

I remember his approach to fast multiplication (FFT, convolve, FFT); I'd guess that he had some fast method for computing 1/x, which would make a/b as fast as a*(1/b).
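I can't say which proof ACN actually gave, but one standard argument along these lines is Newton's iteration for the reciprocal: it uses only multiplication and subtraction, doubles the number of correct digits each step, and reduces a/b to a*(1/b). A float sketch (mine, for illustration):

```python
def newton_reciprocal(b, x0, steps=6):
    """Approximate 1/b by Newton's iteration on f(x) = 1/x - b:
        x_{n+1} = x_n * (2 - b * x_n)
    Each step is two multiplications and one subtraction, and the error
    squares each time, so a handful of steps reach machine precision.
    x0 must be a rough initial guess with 0 < x0 < 2/b to converge."""
    x = x0
    for _ in range(steps):
        x = x * (2.0 - b * x)
    return x

def divide(a, b, x0):
    """a/b as a multiplication, once 1/b has been computed."""
    return a * newton_reciprocal(b, x0)
```

Starting from the crude guess 0.3, six iterations pin down 1/3 to full double precision. (In a real bignum setting the initial guess comes from a low-precision float divide, and each iteration runs at increasing precision, so the total cost is a constant multiple of one full-precision multiply.)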


**happydisciple** | 7th May 2008 16:33 (UTC)

Whenever I need to do a convolution or, more likely, a correlation/autocorrelation, I tend to go FFT, multiply, FFT. In what case/by what algorithm is a direct convolution faster than multiplication?
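For context on the connection (my example, not from the thread): multiplying two integers written as digit sequences *is* a convolution of those sequences followed by carry propagation, which is why replacing the O(n^2) direct convolution with an FFT gives fast multiplication. The direct version looks like this:

```python
def multiply_via_convolution(x, y, base=10):
    """Schoolbook multiplication of non-negative integers, written
    explicitly as: convolve the digit sequences, then propagate carries.
    This is the O(n^2) step that FFT-based multiplication replaces."""
    xd = [int(d) for d in str(x)][::-1]   # little-endian digits of x
    yd = [int(d) for d in str(y)][::-1]   # little-endian digits of y

    # Convolution: conv[k] = sum of xd[i] * yd[j] over all i + j == k.
    conv = [0] * (len(xd) + len(yd))
    for i, a in enumerate(xd):
        for j, b in enumerate(yd):
            conv[i + j] += a * b

    # Carry propagation turns the convolution back into a number.
    carry, result = 0, 0
    for k, c in enumerate(conv):
        total = c + carry
        result += (total % base) * base ** k
        carry = total // base
    return result
```

Direct convolution wins only for short operands, where the FFT's constant factors dominate; past a few dozen coefficients the FFT route takes over, which matches the "FFT, multiply, FFT" habit.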
