Comments (50)
- teo_zero
  When you have so few bits, does it really make sense to invent a meaning for the bit positions? Just use an index into a "palette" of pre-determined numbers.
  As a bonus, any operation can be replaced with a lookup into an n×n table.
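  The palette idea above can be sketched in a few lines (the palette values and names here are my own illustration, not anything from the thread):

  ```python
  # A hypothetical 2-bit "palette" float: each code is just an index into a
  # table of pre-chosen values, and any binary op becomes an n x n lookup.
  PALETTE = [0.0, 0.5, 1.0, 2.0]  # assumed palette; any four values work

  def nearest_code(x):
      """Round a real number to the index of the nearest palette entry."""
      return min(range(len(PALETTE)), key=lambda i: abs(PALETTE[i] - x))

  # Precompute multiplication as a 4x4 table of result codes.
  MUL = [[nearest_code(PALETTE[a] * PALETTE[b]) for b in range(4)]
         for a in range(4)]

  # 0.5 * 2.0 == 1.0, i.e. code 1 times code 3 yields code 2.
  assert PALETTE[MUL[1][3]] == 1.0
  ```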
- FarmerPotato
  I too want fewer bits of mantissa in my floating point!
  But what I wish is that there had been an fp64 encoding with a field for the number of significant digits. strtod() would encode this, fresh out of an instrument reading (serial). It would be passed along. It would be useful EVEN if it weren't updated by arithmetic with other such numbers.
  Every day I get a query like "why does the datum have so many decimal digits? You can't possibly be saying that the instrument is that precise!" Well, it's because of sprintf(buf, "%.16g", x) as the default to CYA.
  Also sad is the complaint about "0.56000 ... 01" because someone did sprintf("%.16f").
  I can't fix this in one class -- data travels between too many languages and communication buffers.
  In short, I wish I had an fp64 double where the last 4 bits were ALWAYS left alone by the CPU.
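  The formatting complaint above is easy to reproduce (a Python sketch; the printf-style format behaves the same as C's sprintf here):

  ```python
  # Fixed 16-digit formatting exposes the binary representation noise of
  # 0.56, while shortest round-trip formatting does not.
  x = 0.56
  fixed = f"{x:.16f}"    # "0.5600000000000001" -- the "0.56000...01" complaint
  shortest = repr(x)     # "0.56" -- shortest string that round-trips exactly
  print(fixed, shortest)
  ```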
- conaclos
  There is a relevant Wikipedia page about minifloats [0]:
  > The smallest possible float size that follows all IEEE principles, including normalized numbers, subnormal numbers, signed zero, signed infinity, and multiple NaN values, is a 4-bit float with 1-bit sign, 2-bit exponent, and 1-bit mantissa.
  [0] https://en.wikipedia.org/wiki/Minifloat
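  That quoted 1-2-1 layout can be checked by enumerating its 16 codes (a sketch; decode_e2m1 is my own name, and I take the bias as 2^(2-1) - 1 = 1 per the usual IEEE rule):

  ```python
  import math

  def decode_e2m1(bits):
      """Decode a 4-bit IEEE-style minifloat: 1 sign, 2 exponent, 1 mantissa."""
      s = (bits >> 3) & 1
      e = (bits >> 1) & 0b11
      m = bits & 1
      sign = -1.0 if s else 1.0
      if e == 0:                      # signed zero / subnormal: 2^(1-bias) * m/2
          return sign * m * 0.5
      if e == 0b11:                   # specials: infinity and NaN
          return sign * math.inf if m == 0 else math.nan
      return sign * 2.0 ** (e - 1) * (1 + m / 2)  # normal numbers

  # Positive half of the code space decodes to:
  # 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, inf, nan
  ```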
- sc0ttyd
  9 years ago, I shared this as an April Fools joke here on HN. It seems that life is imitating art.
  https://github.com/sdd/ieee754-rrp
- Figs
  > The notation ExMm denotes a format with x exponent bits and y mantissa bits.
  Shouldn't that be m mantissa bits (not y) -- i.e. a typo here -- or am I misunderstanding something?
- karmakaze
  There's an "Update:" note about a next post on the NF4 format. As far as I can tell this is neither NVFP4 nor MXFP4, which are commonly used with LLM model files. The thing with these formats is that common information is factored out per batch, so it's not a format for a single value but a format for groups of values. I'd like to know more about these (but not enough to go research them myself).
- nivertech
  FP2 spec:
    00 -> 0.0
    01 -> 1.0
    10 -> Inf
    11 -> NaN
  or:
    00 -> 0.0
    01 -> 1.0
    10 -> Inf
    11 -> -Inf
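  With only four codes, both proposed variants are literally just decode tables (a sketch; the table names are mine):

  ```python
  import math

  # The two FP2 variants above, written as plain decode tables.
  FP2_NAN_VARIANT = {0b00: 0.0, 0b01: 1.0, 0b10: math.inf, 0b11: math.nan}
  FP2_INF_VARIANT = {0b00: 0.0, 0b01: 1.0, 0b10: math.inf, 0b11: -math.inf}
  ```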
- chrisjj
  > Programmers were grateful for the move from 32-bit floats to 64-bit floats. It doesn’t hurt to have more precision
  Someone didn't try it on GPU...
- Panzerschrek
  This doesn't look like a good floating point format. NaNs and Infs are missing.
- ant6n
  > In ancient times, floating point numbers were stored in 32 bits.
  I thought in ancient times, floating point numbers used to be 80 bits. They lived in a funky mini stack on the coprocessor (x87). Then one day, somebody came along and standardized the 32- and 64-bit floats we still have today.
- brcmthrowaway
  Does Apple GPU support any of these natively? Or does that matter -- is it the kernel that handles the FP format?
- burnt-resistor
  FP4 is 1:2:0:1 in S:E:l:M notation (other examples: binary32 is 1:8:0:23, 8087 extended precision is 1:15:1:63), where:
    S = sign bit present (or magnitude-only absolute value)
    E = exponent bits (typically biased by 2^(E-1) - 1)
    l = explicit leading integer bit present (almost always 0, because the leading digit is always 1 for normals, 0 for denormals, and not very useful for special values)
    M = mantissa (fraction) bits
  The limitation of FP4 is that it lacks infinities, [sq]NaNs, and denormals, which makes it very limited to special purposes only. There's no denying that it might be extremely efficient for very particular problems.
  If a more even distribution were needed, a simpler fixed-point format like 1:2:1 (sign:integer:fraction bits) is possible.
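  For concreteness, a 1:2:0:1 decode can be sketched as follows (my own names; I assume the widely used E2M1 convention with bias 1, in which the top exponent code remains an ordinary normal, so there are indeed no infinities or NaNs -- though this convention does keep one subnormal step):

  ```python
  def decode_fp4(bits):
      """Decode a 4-bit 1:2:0:1 (sign:exponent:leading:mantissa) code.

      Sketch assuming E2M1 with bias 1 and no infinity/NaN encodings.
      """
      s = (bits >> 3) & 1
      e = (bits >> 1) & 0b11
      m = bits & 1
      sign = -1.0 if s else 1.0
      if e == 0:
          # lowest exponent code as a subnormal step: 2^(1-bias) * (m / 2)
          return sign * m * 0.5
      return sign * 2.0 ** (e - 1) * (1 + m / 2)

  # Positive codes decode to: 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0
  ```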