The Impersistence of Memory

When Ryan Williams plugged a novel technique, developed to test the bounds of a key part of complexity theory, into his own framework, he found the result hard to believe. The Massachusetts Institute of Technology (MIT) computer scientist had expected to reduce the memory space needed by a vast range of algorithms, but not to the degree it did.

One longstanding assumption in complexity theory concerns how much memory a program with a predictable runtime needs relative to the number of steps, or time, it takes to complete. Two complexity classes that theoreticians believe capture this relationship are P and PSPACE. The former is the collection of programs that complete in a polynomial number of steps, measured against the number of data elements the algorithm needs to process; the latter is the collection of programs that use a polynomial amount of memory, with no corresponding limit on running time. Complexity theorists do not see the two sets as identical, but until this point it had also seemed a reasonable assumption that they might not be all that different.

“Imagine a computation that produces a new bit of information in every step, based on the bits that it has computed so far. Over t steps of time, it may generate up to t new bits of information in order to reach the final result,” Williams explained. Take a large logic circuit where bits can flow arbitrarily from the inputs to a final set of output gates: it seems reasonable to assume you would need to keep most, if not all, of the intermediate states to be sure of computing the correct result. Conversely, some memory-efficient programs may need far more time to complete than will fit into P.

What Williams found shocking about his result, the first in decades to shift the apparent bounds of PSPACE to a significant degree, is how big the difference seems to be. Rather than paring total storage down to something like t/log t, which had seemed the likely outcome, he wound up with a bound on memory usage only a little more than the square root of t. “It seemed beyond belief that every complex computation could somehow be reimplemented to use a much smaller amount of space than time,” Williams said. The result may point the way to new approaches for determining what separates P and PSPACE.

To make the novel proof work, the MIT researcher began with logic-type circuits, processing them on a multitape Turing machine. The idea was to build an approach in which the algorithm resorts to recomputing intermediate results rather than storing every partial result for possible later use. He then conceptually divided the tapes into blocks of execution, each block corresponding to a small tree within a larger branching program. The sub-trees connect when a result in one block affects another, building a chain of execution that passes the minimum amount of data needed between blocks. The multitape Turing machine made this an effective strategy because it is possible to organize the processing into blocks without requiring the program to explicitly store its progress.
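The general store-versus-recompute tradeoff behind that strategy can be illustrated with a much simpler device than the proof itself uses. The sketch below is a minimal, hypothetical Python example of checkpointing a t-step sequential computation: instead of holding all t intermediate states, it keeps only about √t checkpoints and recomputes any other state from the nearest earlier one. This is only a toy analogy for the idea of recomputing rather than storing, not Williams' construction, which depends on multitape Turing machines and tree-evaluation techniques; the `step` function and all names here are invented for illustration.

```python
import math


def step(state: int) -> int:
    """One step of a stand-in sequential computation (hypothetical)."""
    return (state * 6364136223846793005 + 1442695040888963407) % (1 << 64)


def run_storing_everything(x0: int, t: int):
    """Baseline: keep every one of the t intermediate states in memory."""
    states = [x0]
    for _ in range(t):
        states.append(step(states[-1]))
    return states  # O(t) memory


def query_with_checkpoints(x0: int, t: int, i: int) -> int:
    """Return state i while storing only ~sqrt(t) checkpoints.

    Checkpoints are saved every `stride` steps; any other state is
    recomputed forward from the nearest earlier checkpoint, trading
    extra recomputation time for a much smaller memory footprint.
    """
    stride = max(1, math.isqrt(t))
    checkpoints = {}
    state = x0
    for j in range(t + 1):
        if j % stride == 0:
            checkpoints[j] = state  # ~sqrt(t) entries in total
        if j < t:
            state = step(state)

    base = (i // stride) * stride
    state = checkpoints[base]
    for _ in range(i - base):  # at most sqrt(t) recomputation steps
        state = step(state)
    return state


if __name__ == "__main__":
    t = 10_000
    full = run_storing_everything(7, t)
    assert query_with_checkpoints(7, t, 1234) == full[1234]
    print("checkpointed lookup matches the fully stored history")
```

Williams' actual argument gets its savings from a far more delicate block decomposition and tree-evaluation procedure, but the underlying currency is the same: accept extra recomputation in exchange for holding far less in memory at any one time.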
