Computational Complexity and Statistical Physics

Allon Percus, Gabriel Istrate, and Cristopher Moore

Print publication date: 2005

Print ISBN-13: 9780195177374

Published to Oxford Scholarship Online: November 2020

DOI: 10.1093/oso/9780195177374.001.0001


Introduction: Where Statistical Physics Meets Computation

Chapter:
Chapter 1 Introduction: Where Statistical Physics Meets Computation
Source:
Computational Complexity and Statistical Physics
Author(s):

Allon G. Percus

Gabriel Istrate

Cristopher Moore

Publisher:
Oxford University Press
DOI: 10.1093/oso/9780195177374.003.0007

Computer science and physics have been closely linked since the birth of modern computing. This book is about that link. John von Neumann’s original design for digital computing in the 1940s was motivated by applications in ballistics and hydrodynamics, and his model still underlies today’s hardware architectures. Within several years of the invention of the first digital computers, the Monte Carlo method was developed, putting these devices to work simulating natural processes using the principles of statistical physics. It is difficult to imagine how computing might have evolved without the physical insights that nurtured it. It is impossible to imagine how physics would have evolved without computation.

While digital computers quickly became indispensable, a true theoretical understanding of the efficiency of computation did not emerge until some twenty years later. In 1965, Hartmanis and Stearns [227] as well as Edmonds [139, 140] articulated the notion of computational complexity, categorizing algorithms according to how rapidly their time and space requirements grow with input size. The qualitative distinctions that computational complexity draws between algorithms form the foundation of theoretical computer science.

Chief among these distinctions is that of polynomial versus exponential time. A combinatorial problem belongs to the complexity class P (polynomial time) if there exists an algorithm guaranteeing a solution in a computation time, or number of elementary steps of the algorithm, that grows at most polynomially with the input size. Loosely speaking, such problems are considered computationally feasible. An example is sorting a list of n numbers: even a particularly naive and inefficient algorithm for this task runs in a number of steps that grows as O(n²), so sorting is in the class P (see the first sketch below). A problem belongs to the complexity class NP (non-deterministic polynomial time) if it is merely possible to test, in polynomial time, whether a specific presumed solution is correct (see the second sketch below). Of course, P ⊆ NP: for any problem whose solution can be found in polynomial time, one can surely verify the validity of a presumed solution in polynomial time.
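
To make the polynomial-time claim concrete, here is a minimal Python sketch, not drawn from the chapter itself, of the kind of naive O(n²) sorting algorithm alluded to above; the function name insertion_sort and the test list are our own illustration.

def insertion_sort(items):
    """Sort a list in place; the worst case takes about n^2/2 comparisons."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements one slot to the right until key's position is found.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]

The doubly nested loop makes the O(n²) bound visible directly in the code: since a polynomial bounds the running time, sorting is in P.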
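The defining property of NP, that a presumed solution can be checked in polynomial time, can likewise be illustrated with Boolean formulas, one of this chapter's keyword topics. The sketch below is again our own illustration rather than the authors' code, and its encoding is an assumption: a CNF formula is a list of clauses, each clause a list of nonzero integers, where k denotes the variable x_k and -k its negation.

def verify_assignment(clauses, assignment):
    """Return True iff the truth assignment satisfies every clause.

    Runs in time linear in the total number of literals, hence polynomial
    in the input size.
    """
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 or not x2) and (x2 or x3), with x1=True, x2=True, x3=False
formula = [[1, -2], [2, 3]]
assignment = {1: True, 2: True, 3: False}
print(verify_assignment(formula, assignment))  # True

The verifier touches each literal once, so checking is fast even though finding a satisfying assignment in the first place may, as far as anyone knows, require exponential time.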

Keywords: Boolean formulas, average-case analysis, clause density, decision problems, exponential time, ferromagnetic, glassy behavior, order parameter, partition function
