Computational Complexity and Statistical Physics

Allon Percus, Gabriel Istrate, and Cristopher Moore

Print publication date: 2005

Print ISBN-13: 9780195177374

Published to Oxford Scholarship Online: November 2020

DOI: 10.1093/oso/9780195177374.001.0001



Chapter 3 Analyzing Search Algorithms with Physical Methods
Computational Complexity and Statistical Physics

Simona Cocco

Rémi Monasson

Oxford University Press

The computational effort needed to deal with large combinatorial structures varies considerably with the task to be performed and the resolution procedure used [425]. The worst-case complexity of a decision or optimization problem is defined as the time required by the best algorithm to treat any possible input to the problem. For instance, the worst-case complexity of the problem of sorting a list of n numbers scales as n log n: there exist several algorithms that can order any list in at most ~ n log n elementary operations, and none with asymptotically fewer operations. Unfortunately, the worst-case complexities of many important computational problems, called NP-complete, are not known. Partitioning a list of n numbers into two sets with equal partial sums is one among hundreds of known NP-complete problems. It is a fundamental conjecture of theoretical computer science that there exists no algorithm capable of partitioning any list of length n, or of solving any other NP-complete problem with inputs of size n, in a time bounded by a polynomial in n. Therefore, when trying to solve such a problem exactly, one necessarily uses algorithms that may take exponential time on some inputs. Quantifying how "frequent" these hard inputs are for a given algorithm is the question answered by the analysis of algorithms. We will present an overview of recent work by physicists to address this point, and more precisely to characterize the average performance (hereafter simply called complexity) of a given algorithm over a distribution of inputs to a computational problem. The history of algorithm analysis by physical methods and ideas is at least as old as the use of computers by physicists. One well-established chapter in this history is the analysis of Monte Carlo sampling algorithms for statistical mechanics models.
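As an aside (this sketch is ours, not from the chapter), the number partitioning problem mentioned above can be decided by a standard pseudo-polynomial dynamic program over reachable subset sums. Its O(n · Σ) running time is polynomial in the *magnitude* of the numbers but exponential in their bit length, which is why it does not contradict NP-completeness; the function name is illustrative.

```python
def can_partition(nums):
    """Decide whether nums can be split into two subsets with equal sums.

    Classic pseudo-polynomial dynamic program: track every subset sum
    reachable so far, up to half the total. O(n * sum(nums)) time, which
    is exponential in the input size measured in bits.
    """
    total = sum(nums)
    if total % 2:          # odd total: equal split is impossible
        return False
    target = total // 2
    reachable = {0}        # subset sums reachable with the numbers seen so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable
```

For example, `can_partition([3, 1, 1, 2, 2, 1])` returns `True` (both halves sum to 5), while `can_partition([1, 2, 5])` returns `False`.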
It is well known that phase transitions, that is, abrupt changes in the physical properties of the model, can imply a dramatic increase in the time necessary for the sampling procedure. This phenomenon is commonly known as critical slowing down. The physicist's insight comes from the analogy between the dynamics of algorithms and the physical dynamics of the system. That analogy is quite natural: in fact, many algorithms mimic the physical dynamics.
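To make the analogy concrete (a minimal sketch of our own, not code from the chapter), here is a Metropolis Monte Carlo sampler for a one-dimensional Ising chain. The algorithm's single-spin-flip dynamics is exactly the kind of local physical dynamics referred to above; near a phase transition, such local moves decorrelate the configuration ever more slowly, which is the critical slowing down of the sampler.

```python
import math
import random

def metropolis_ising_1d(n=50, beta=0.5, steps=10000, seed=0):
    """Metropolis sampling of a 1D Ising chain with periodic boundaries.

    Each step proposes flipping one spin and accepts the flip with
    probability min(1, exp(-beta * dE)), so the Markov chain converges
    to the Boltzmann distribution at inverse temperature beta.
    """
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change from flipping spin i, with coupling J = 1:
        # dE = 2 * s_i * (s_{i-1} + s_{i+1})
        dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins
```

The acceptance rule is the only place the temperature enters; the proposal (pick a random spin) is the "algorithmic dynamics" that mirrors the physical single-spin relaxation.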

Keywords:   backtracking, coding theory, decoding, exponential time, gradient descent, low-density parity check (LDPC) codes, metastability, parity check matrix
