
floaty fiddling

6th May 2008 | 20:24

So I needed some Lua functions to extract bit fields from an integral value, specifically struct stat.st_mode. Lua only has floating point numbers, and its standard library doesn't extend beyond ANSI C. So I wrote the following. (Note that a % b == a - floor(a/b)*b.)

   -- extract the bits from lobit (inclusive) up to hibit (exclusive),
   -- leaving them in their original positions
   function mask(lobit, hibit, num)
      local toolo = num % 2^lobit
      return (num - toolo) % 2^hibit
   end

   -- true if the given bit of num is set
   function bit(bit, num)
      return num / 2^bit % 2 >= 1
   end
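A quick sanity check of the same arithmetic in Python (the helper names mirror the Lua above; Python's `%` on floats is defined the same floor-based way):

```python
def mask(lobit, hibit, num):
    # Clear the bits below lobit, then the bits at hibit and above;
    # the surviving field keeps its original position.
    toolo = num % 2.0 ** lobit
    return (num - toolo) % 2.0 ** hibit

def bit(b, num):
    # A set bit makes (num / 2^b) mod 2 land in [1, 2).
    return num / 2.0 ** b % 2 >= 1

# st_mode-style example: 0o755 = rwxr-xr-x
m = float(0o755)
print(oct(int(mask(0, 3, m))))   # → 0o5  (other-permission bits)
print(bit(8, m))                 # → True (owner-read bit, 0o400)
```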


Comments {8}

The Bellinghman

from: bellinghman
date: 6th May 2008 19:55 (UTC)

A lovely example of using a spade when you really want a screwdriver ...

And yes, I've been there before. Or somewhere similar - the calculator I have on my desk is one I had to go out and buy, because the machine I was dealing with only supported output in decimal, whereas I actually needed to know bit statuses.

Twenty plus years later, I've yet to find out how to change the battery, but it's still the one I reach for. Oh yes, no battery - it was one of the earliest solar calculators from Casio.


Tony Finch

from: fanf
date: 7th May 2008 08:44 (UTC)

The alternative was to write a C extension to do the bit twiddling, but the overhead wasn't worth it for so few lines of code.


from: ingulf
date: 6th May 2008 20:59 (UTC)

Lua has only floats? *wince* Okay, maybe it's not such a good language for our integer-only processor...

I did something like that ages ago. At the time perl had a bignum module, but it didn't support bitwise operations (mainly because its representation was decimal strings). Be warned: I kept getting bitten by special cases. I can't remember what they were, but I think they had to do with signedness.

Question: Arthur Norman, in his lecture on multiplication of really large numbers, gave a (very short) proof that division does not need to be more expensive than multiplication. Can you remember what it was?

It occurred to me recently that for floats, you can have fast division by writing a float class which has one exponent but two mantissas. The value of the object is a/b, where a is (exp, mant1) and b is (0, mant2). Division has the same cost as multiplication, but unfortunately so do addition and subtraction, so this scheme is unlikely to be very useful. But I don't know if I invented that or heard it in ACN's lecture.
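For what it's worth, the scheme could be sketched like this (class and method names are mine, purely illustrative):

```python
class Ratio:
    """Value is a / b, where a = mant1 * 2**exp and b = mant2 (exponent 0)."""
    def __init__(self, exp, mant1, mant2):
        self.exp, self.mant1, self.mant2 = exp, mant1, mant2

    def value(self):
        return (self.mant1 * 2.0 ** self.exp) / self.mant2

    def mul(self, other):
        # Two mantissa multiplies: the cost of an ordinary multiplication.
        return Ratio(self.exp + other.exp,
                     self.mant1 * other.mant1,
                     self.mant2 * other.mant2)

    def div(self, other):
        # Cross-multiply: division costs no more than multiplication here.
        # The catch noted above: addition and subtraction would also need
        # cross-multiplication, so they become just as expensive.
        return Ratio(self.exp - other.exp,
                     self.mant1 * other.mant2,
                     self.mant2 * other.mant1)

x = Ratio(0, 3, 4)   # 3/4  = 0.75
y = Ratio(1, 5, 8)   # 10/8 = 1.25
print(x.div(y).value())   # → 0.6
```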


Tony Finch

from: fanf
date: 7th May 2008 08:59 (UTC)

Lua actually has a configurable numeric type, especially for embedded users. It seems to have a culture of mutating the language semantics if necessary for specific projects.

My bit twiddling requirements are very simple so I'm not worried about negative numbers :-)

I don't have a clue about ACN's division complexity proof.


The Bellinghman

from: bellinghman
date: 7th May 2008 09:19 (UTC)

On a high theoretical level, all additions are subtractions, and all divisions are multiplications. In the latter case, finding the multiplicand is an exercise for the reader.

(Though it is an easy optimisation if it's a compile time known value. Or loop-invariant.)
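The compile-time case is the classic reciprocal-multiplication trick: replace division by a known constant with a multiply by a precomputed fixed-point reciprocal and a shift. A sketch of the idea (not any particular compiler's output):

```python
def magic(d, bits=32):
    # Fixed-point reciprocal ceil(2**s / d), with enough extra precision
    # (s = 2 * bits) that the truncation error never reaches the next
    # integer for any n < 2**bits.
    s = bits * 2
    m = (1 << s) // d + 1
    return m, s

def div_by_const(n, m, s):
    # One multiply and one shift instead of a divide.
    return (n * m) >> s

m, s = magic(7)
print(div_by_const(100, m, s))   # → 14
```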


The Bellinghman

from: bellinghman
date: 7th May 2008 09:17 (UTC)

I was doing a Code 39 barcode check-digit algorithm the other day, in a language that supports only integers and strings.

I could have modified the language to support arrays, but heck, it could do with a for() loop first. The algorithm didn't really need an array anyway - not when the language has strings with addressable characters.
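That approach can be sketched in Python - the mod-43 check character from the Code 39 symbology, using a string as the lookup table instead of an array:

```python
# Code 39 character set in value order, 0..42.
CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def check_char(data):
    # Each character's value is simply its position in CHARSET;
    # the check character encodes the sum of values mod 43.
    total = 0
    for ch in data:
        total = total + CHARSET.index(ch)
    return CHARSET[total % 43]

print(check_char("12345"))   # → F  (1+2+3+4+5 = 15, CHARSET[15] = 'F')
```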



from: pjc50
date: 7th May 2008 10:31 (UTC)

http://portal.acm.org/citation.cfm?id=806321&dl= , possibly?

I remember his approach to fast multiplication (FFT, convolve, FFT); I'd guess that he had some fast method for computing 1/x, which would make a/b as fast as a*(1/b).
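The standard way to compute 1/x with only multiplications is Newton's iteration x ← x(2 − bx), which doubles the number of correct digits per step - so the reciprocal (and hence a/b) costs only a constant factor more than multiplication. A sketch (not necessarily ACN's proof):

```python
import math

def reciprocal(b, iters=6):
    # Newton's method for 1/b, b > 0. The starting guess is scaled so
    # that |1 - b*x| < 1, which guarantees quadratic convergence.
    x = 2.0 ** -math.ceil(math.log2(b))
    for _ in range(iters):
        x = x * (2.0 - b * x)   # two multiplies, one subtraction
    return x

# a/b computed as a * (1/b):
print(355 * reciprocal(113))   # ≈ 3.14159...
```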


Jos Dingjan

from: happydisciple
date: 7th May 2008 16:33 (UTC)

[...] his approach to fast multiplication (FFT, convolve, FFT); [...]

Whenever I need to do a convolution or, more likely, a correlation/autocorrelation, I tend to go FFT, multiply, FFT. In what case/by what algorithm is a direct convolution faster than multiplication?
