THE MEMORY OF THE CITY
Syngenesis
The measure of a machine (Computing in science fiction, Part I)
2013-07-11 00:16:16


In a scant century, computers utterly reshaped human civilization. They rode with us as we catapulted from the vestiges of a deeply insecure, ceremonial society, where but a few people in a few countries had the means to attain lives of intellectual fulfilment, into one that has—increasingly—become quite the opposite. Along the way, our calculating engines have inspired us to change the way we think, and the absolutism of that new wave of conceptual organization has both depressed us in its omniscience and taught us to forgive in its impartiality. Computers have given us the power to behold nature, at every level, and through that we have been humbled.

It's not surprising, then, that unique or advanced inventions—specifically computers—are prominent in so many science fiction stories.

In this three-part series, we'll look at how computers work, how alien computers might work, and how to write for convincing artificial intelligence.


[Image. Caption: Most of them look like this.]

In the guise of thinking machines, computers have played every human role in every human experience imaginable, from caring for a child to attaining enlightenment on the battlefield, and perhaps more. Ships need control systems. Explorers need survey instruments. Generals need war theatre simulations. Some machines even bake Black Forest cake.

But what makes a good fictional computer? What does it mean when a computer is quantum, or mechanical, or photonic? What are the limitations of computers, and how would they work? Better still—how will they work in the future? Or if they're from another planet? Let's investigate.

In computer science, there is a fundamental concept called Turing completeness. Put simply, a Turing-complete computer can calculate anything that any other computer can calculate, given enough time and memory: any computable function. As far as we know, all processes in the universe can be simulated with a Turing-complete computer, so it has been suggested that the universe itself may very well be defined by such a computer.

To be Turing-complete, a computer only needs a few things: some kind of memory where it can store data, the ability to read and modify values in that memory based on a list of instructions, the ability to jump around on that list of instructions, and the ability to make decisions based on if-then statements. With these four basic elements, any computer program can be written, although it may very well be slow.

Our computers today work this way. We make heavy use of parallelism to speed up basic instructions, such as in the vector processor of a graphics card, but at the heart of it, they jump through a list of instructions just like a BASIC program—

10 PRINT "HELLO"
20 GOTO 10

—we just abstract that away to make things look more pleasant to work with. At some level, though, that's what the computer actually sees.
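
To make that concrete, here is a minimal sketch, in Python with an invented instruction set, of a machine built from only the four ingredients above: memory, instructions that read and modify it, jumps, and conditional decisions. It is nothing like production code, but everything a real processor does is an elaboration of this loop.

# A toy machine: memory cells, an instruction list, jumps, and conditional
# branches. The instruction names are invented purely for illustration.
def run(program, memory):
    pc = 0                                   # position in the instruction list
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                      # write a constant into memory
            memory[args[0]] = args[1]
        elif op == "add":                    # memory[a] = memory[a] + memory[b]
            memory[args[0]] += memory[args[1]]
        elif op == "jump":                   # jump to another instruction
            pc = args[0]
            continue
        elif op == "jump_if_zero":           # the if-then decision
            if memory[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1                              # otherwise, move to the next line
    return memory

# Multiply 6 by 7 using nothing but those primitives.
program = [
    ("jump_if_zero", "count", 4),            # 0: if count == 0, we're done
    ("add", "result", "seven"),              # 1: result += 7
    ("add", "count", "minus_one"),           # 2: count -= 1
    ("jump", 0),                             # 3: back to the test
]
memory = {"result": 0, "count": 6, "seven": 7, "minus_one": -1}
print(run(program, memory)["result"])        # 42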

This is vitally important to understanding how computers work, because it places essential limits (or so we believe) on how quickly some work can be done. If you have a list of names and don't know what order they're in, for example, then you have no choice but to check each one. There is no way to make a single lookup faster unless you sort the list first, and sorting always takes at least as long as one full scan; it only pays off if you're going to search the list many times.
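
A quick sketch of that trade-off in Python (the names are made up): a straight scan of an unsorted list may have to touch every entry, while a binary search of a sorted list rules out half of the remaining candidates with each comparison, but only because the sorting work was paid for up front.

import bisect

names = ["Ripley", "Deckard", "Hari", "Okonkwo", "Valentine"]   # invented data

def linear_search(items, target):
    # Unsorted: nothing for it but to check each entry in turn.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

sorted_names = sorted(names)        # itself costs more than one full scan
def binary_search(items, target):
    # Sorted: each comparison discards half of what's left.
    i = bisect.bisect_left(items, target)
    return i if i < len(items) and items[i] == target else -1

print(linear_search(names, "Hari"))          # up to len(names) checks
print(binary_search(sorted_names, "Hari"))   # about log2(len(names)) checks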

Similarly, if you want to find the shortest round-trip between several cities, no known method is fundamentally better than checking an enormous number of candidate routes, and even the best exact algorithms take time that grows exponentially with the number of cities. And if you want to sort a deck of cards by comparing them, you must look at all of the cards at least once, and most cards more than once. (Specifically, each card has to be handled roughly one extra time every time the number of cards in the deck doubles. Sometimes less, if you're lucky and know in advance what values the cards can take.)
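
Here is what that brute force looks like as a Python sketch, with a made-up distance table between four cities; with n cities it grinds through (n - 1)! candidate routes, which is already in the millions by the time n reaches a dozen.

from itertools import permutations

# Invented, symmetric distances between four cities.
dist = {("A", "B"): 3, ("A", "C"): 7, ("A", "D"): 4,
        ("B", "C"): 2, ("B", "D"): 6, ("C", "D"): 5}

def d(x, y):
    return dist.get((x, y)) or dist[(y, x)]

def shortest_round_trip(cities):
    start, rest = cities[0], cities[1:]
    best = None
    for order in permutations(rest):              # (n - 1)! candidate routes
        route = (start, *order, start)
        length = sum(d(a, b) for a, b in zip(route, route[1:]))
        if best is None or length < best[0]:
            best = (length, route)
    return best

print(shortest_round_trip(["A", "B", "C", "D"]))  # (14, ('A', 'B', 'C', 'D', 'A'))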

Algorithmically, there's no way around any of this. The machine has to do all of the checking we do—although generally without the benefit of the short-term memory juggling and peripheral vision cues that we typically employ. There are ways to approximate the work, and even to do so with certain guarantees about how bad the worst-case guess will be, but these methods have their own efficiency constraints.
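
The round-trip problem above is a good example of such an approximation. The greedy sketch below (reusing the same invented distance table) simply hops to the nearest unvisited city each time; it finishes almost instantly, but it is only a guess, and this particular heuristic carries no strong guarantee about how far from the true optimum it can stray. More careful approximation schemes do offer such guarantees, at their own cost.

# Same invented distance table as the brute-force sketch above.
dist = {("A", "B"): 3, ("A", "C"): 7, ("A", "D"): 4,
        ("B", "C"): 2, ("B", "D"): 6, ("C", "D"): 5}

def d(x, y):
    return dist.get((x, y)) or dist[(y, x)]

def nearest_neighbour_trip(cities):
    # Greedy approximation: always travel to the closest unvisited city.
    route = [cities[0]]
    unvisited = set(cities[1:])
    while unvisited:
        here = route[-1]
        nearest = min(unvisited, key=lambda c: d(here, c))
        route.append(nearest)
        unvisited.remove(nearest)
    route.append(cities[0])                       # and return home
    length = sum(d(a, b) for a, b in zip(route, route[1:]))
    return length, tuple(route)

# Happens to find the optimum on this tiny map; on larger ones it often won't.
print(nearest_neighbour_trip(["A", "B", "C", "D"]))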

On the bright side, some problems can be beaten with parallelism: work that doesn't depend much, or at all, on the results of previous steps can be split up between multiple computers or processors, like rendering different parts of an image or searching subsets of a large database. Sometimes calculations are semi-parallel, and have parallelised phases mixed with single-processor phases. The same amount of work must be done, but the problem is no longer as bottlenecked.
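
A small Python sketch of that splitting (the data and the four-way chunking are invented): each worker scans its own slice of a large list independently, and the partial results are stitched together afterwards. The total amount of work is unchanged; it is merely spread across whatever processors are available.

from multiprocessing import Pool

def search_chunk(args):
    # Each worker scans its own slice of the data, with no need to talk
    # to the others until the very end.
    chunk, target = args
    return [record for record in chunk if target in record]

if __name__ == "__main__":
    database = ["record-%d" % i for i in range(1000000)]      # invented data
    chunks = [database[i::4] for i in range(4)]               # four slices
    with Pool(processes=4) as pool:
        parts = pool.map(search_chunk, [(c, "42999") for c in chunks])
    matches = [m for part in parts for m in part]             # stitch together
    print(len(matches))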

Other problems can be beaten with quantum computing. Each qubit added to a quantum computer's memory doubles the number of amplitudes in its state, thanks to superposition, which makes certain problems involving very long combinations far more tractable. Such computers can still only solve the problems a Turing-complete machine can solve, but for particular tasks, unstructured search, say, or factoring, their peculiar problem-solving methods give dramatic speedups over any known classical approach.
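
The doubling itself is easy to demonstrate: a register of n qubits is described by 2^n amplitudes, one for each possible pattern of classical bits. The little classical simulation below (plain Python, purely illustrative) stores those amplitudes directly, which is also why simulating even a few dozen qubits on ordinary hardware rapidly becomes hopeless.

from math import sqrt

def uniform_superposition(n_qubits):
    # A register of n qubits needs 2**n amplitudes; this one weights every
    # classical bit pattern equally.
    size = 2 ** n_qubits
    return [1 / sqrt(size)] * size

for n in range(1, 6):
    state = uniform_superposition(n)
    print(n, "qubits ->", len(state), "amplitudes")   # doubles with every qubit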

At the time of this writing, it looks like quantum computing will become progressively more popular for certain tasks. Current quantum computers are not suitable for general-purpose use; commercial products like the D-Wave One already exist, but they are specialised annealing machines. A large-scale, general-purpose quantum computer, though, could run Shor's algorithm to find the prime factors of very large numbers efficiently, which would defeat most existing forms of public-key cryptography. Quantum physics also offers a counter: quantum key distribution, commonly called quantum cryptography, promises ways of exchanging keys that cannot be intercepted without detection.
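
To see why factoring is such a tempting target, consider the obvious classical attack, sketched below in Python: trial division has to probe candidate divisors up to the square root of the number, and for the several-hundred-digit numbers used in real cryptography that square root is itself astronomically large. Shor's algorithm, run on a sufficiently large quantum computer, would sidestep that wall.

def trial_division(n):
    # The naive classical approach: probe divisors up to sqrt(n).
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2017 * 2027))   # [2017, 2027] -- quick for small numbers
# ...but the count of candidate divisors grows exponentially with the number
# of digits, which is what protects today's public-key cryptography.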

Most likely, because of the focus on these key applications and problems like them, it will be many years before quantum computing displaces classical electronic or photonic circuitry entirely, so a near-future human computer should probably contain quantum elements only as specialised components alongside conventional circuitry.

Click here for part two.
Syngenesis   8452.908 tgc / 2013.524 ce