
Why do we not use continuous physical quantities to represent real numbers?

Submitted by: @import:stackexchange-cs

Problem

I've just been doing some pondering. Computers already operate on fundamentally continuous physical quantities, and we then use transistors to turn those real quantities into effectively discrete numbers. So why don't any computer architectures skip the intermediary and do away with floating-point numbers as a representation of the reals? I understand that for general-purpose computing this is probably more effort than it's worth, but high-precision scientific computing is such a large field that it surprises me there aren't any general-purpose chips that actually leverage the continuous nature behind the representation of numbers on computers. What are the challenges in making an analogue computing machine general-purpose enough to be widely used, and are there any potential solutions to those problems? It's a bit of a vague question, I know, but my knowledge is also quite vague, so it's difficult for me to ask more precise questions. Any related links or avenues for further research would also be greatly appreciated.

Solution

There are many challenges:

- Digital logic can be made reliable, so that small variations in voltage and the like are corrected at every gate. It's less clear how to do that with continuous/analog logic, so small noise and/or errors could compound.

- This would at best let you represent fixed-point numbers, not floating-point numbers. For instance, if a wire carrying $x$ volts represents the number $x$, then you won't be able to accurately represent either $10^{-9}$ or $10^9$ with such a representation, yet standard floating-point arithmetic handles very small and very large numbers with no difficulty (see the first sketch after this list). A lot of scientific computing relies heavily on floating-point numbers, so being limited to fixed-point numbers is a significant restriction. The up-front cost of designing hardware is very high, so if you're going to design hardware, it needs to be widely applicable.

- The precision would likely be poor, due to noise in analog circuits. You might get only two or three digits of precision, which isn't enough for most scientific computing; the second sketch below gives a rough sense of why.

- How would you multiply two numbers? It seems very difficult to design a circuit that multiplies: given one wire powered at $x$ volts and another wire powered at $y$ volts, you'd need a circuit that produces an output wire at $xy$ volts.
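
Here is the sketch promised in the fixed-point bullet above: a minimal Python illustration of the dynamic-range problem. The full-scale voltage and the noise floor are assumptions chosen purely for illustration, not measurements of any real device.

```python
# Minimal sketch of the dynamic-range problem (all numbers are assumptions
# for illustration): suppose a wire encodes the value x as x volts, the
# supply rail caps it at 10 V, and readout noise swamps anything below 1 mV.

V_MAX = 10.0   # assumed full-scale voltage of the wire
NOISE = 1e-3   # assumed readout noise floor, in volts

def on_wire(x: float) -> bool:
    """True if x fits between the noise floor and the supply rail."""
    return NOISE <= abs(x) <= V_MAX

for x in (1e-9, 1.0, 1e9):
    # IEEE-754 doubles handle all three values without trouble;
    # the voltage encoding only captures the middle one.
    print(f"{x:g}: wire={on_wire(x)}, float64=ok")
```

With these assumptions the wire covers only four orders of magnitude, whereas a double-precision float spans roughly 600; that gap is the substance of the fixed-point objection.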

All of this makes analog circuits unlikely to be attractive for general-purpose computation or for scientific computing in general. Still, there have been proposals to use analog circuits for some very specific tasks, such as evaluating neural networks.
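
To give a rough feel for the noise-compounding and precision points above, here is a small Monte-Carlo sketch. The per-stage noise level and the circuit depth are assumptions picked for illustration; real analog error models are more complicated.

```python
# Rough Monte-Carlo sketch of noise compounding in an analog pipeline
# (assumed numbers: 0.1% Gaussian noise per stage, 100 chained stages).
# Digital logic restores the signal at every gate, so errors don't
# accumulate this way; an analog chain has no such restoring step.

import random

STAGES = 100    # assumed depth of the computation
SIGMA = 1e-3    # assumed relative noise added by each stage

def analog_chain(x: float) -> float:
    """Pass a value through STAGES identity stages, each adding noise."""
    for _ in range(STAGES):
        x *= 1.0 + random.gauss(0.0, SIGMA)
    return x

random.seed(0)
trials = [analog_chain(1.0) for _ in range(10_000)]
mean = sum(trials) / len(trials)
std = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
print(f"after {STAGES} stages: mean ~ {mean:.4f}, relative std ~ {std:.4f}")
# With these assumptions the relative error comes out around 1%, i.e.
# roughly two reliable digits, in line with the estimate above.
```

The per-stage errors add in quadrature, so the relative error grows like $\sigma\sqrt{n}$ for $n$ stages; deeper computations lose digits accordingly, which is exactly what signal restoration in digital logic prevents.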

Context

StackExchange Computer Science Q#148499, answer score: 3
