
This monograph addresses the need to clarify basic mathematical concepts at the crossroad between gravitation and quantum physics. Selected mathematical and theoretical topics are presented within a not-too-short, integrated approach that exploits standard and nonstandard notions in natural geometric language. The role of structure groups can be regarded as secondary even in the treatment of the gauge fields themselves. Two-spinors yield a partly original ‘minimal geometric data’ approach to Einstein–Cartan–Maxwell–Dirac fields. The gravitational field is jointly represented by a spinor connection and by a soldering form (a ‘tetrad’) valued in a vector bundle naturally constructed from the assumed 2-spinor bundle. We give a presentation of electroweak theory that dispenses with group-related notions, and we introduce a nonstandard, natural extension of it. Also within the 2-spinor approach we present: a nonstandard view of gauge freedom; a first-order Lagrangian theory of fields with arbitrary spin; an original treatment of Lie derivatives of spinors and spinor connections. Furthermore we introduce an original formulation of Lagrangian field theories based on covariant differentials, which works in classical and quantum field theories alike and simplifies calculations. We offer a precise mathematical approach to quantum bundles and quantum fields, including ghosts, BRST symmetry and antifields, treating the geometry of quantum bundles and their jet prolongations in terms of Frölicher's notion of smoothness. We propose an approach to quantum particle physics based on the notion of detector, and illustrate the basic scattering computations in that context.

Cryptography is a vital technology that underpins the security of information in computer networks. This book presents an introduction to the role that cryptography plays in providing information security for technologies such as the Internet, mobile phones, payment cards, and wireless local area networks. Focusing on the fundamental principles that ground modern cryptography as they arise in modern applications, it avoids both an over-reliance on transient current technologies and an overwhelming amount of theoretical research. A short appendix is included for those looking for a deeper appreciation of some of the concepts involved. By the end of this book, the reader will not only be able to understand the practical issues concerned with the deployment of cryptographic mechanisms, including the management of cryptographic keys, but will also be able to interpret future developments in this increasingly important area of technology.

This text provides an introduction to the theoretical, practical, and numerical aspects of image registration, with special emphasis on medical imaging. Given a so-called reference and template image, the goal of image registration is to find a reasonable transformation such that the transformed template is similar to the reference image. Image registration is utilized whenever information obtained from different viewpoints, times, and sensors needs to be combined or compared, and unwanted distortion needs to be eliminated. The book provides a systematic introduction to image registration and discusses the basic mathematical principles, including aspects from approximation theory, image processing, numerics, optimization, partial differential equations, and statistics, with a strong focus on numerical methods. A unified variational approach is introduced and enables a separation into data-related issues, like image-feature- or image-intensity-based similarity measures, and problem-inherent regularization, like elastic or diffusion registration. This general framework is further used for the explanation and classification of established methods as well as the design of new schemes and building blocks including landmark, thin-plate spline, mutual information, elastic, fluid, demon, diffusion, and curvature registration.
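The registration goal described above can be illustrated in one dimension with a minimal sketch. Everything below (the synthetic images, the rigid-shift transformation, and the brute-force search) is an illustrative simplification of the variational setting; it only shows what "find a transformation making the template similar to the reference" means concretely:

```python
import numpy as np

# Toy 1-D registration: given reference R and template T, find a shift s
# minimizing the sum-of-squared-differences D(s) = sum_x (T(x + s) - R(x))^2.
x = np.linspace(0.0, 1.0, 201)
reference = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian bump centered at 0.5
template = np.exp(-((x - 0.6) / 0.05) ** 2)    # same bump, displaced to 0.6

def ssd(shift):
    # transform the template by a rigid shift, via linear interpolation
    shifted = np.interp(x + shift, x, template, left=0.0, right=0.0)
    return float(np.sum((shifted - reference) ** 2))

shifts = np.linspace(-0.3, 0.3, 601)            # brute-force search grid
best = shifts[np.argmin([ssd(s) for s in shifts])]
print(f"recovered shift: {best:.3f}")            # the true displacement is 0.1
```

Real registration replaces the scalar shift with an elastic or diffusive deformation field and the exhaustive search with a regularized optimization, which is exactly where the book's variational framework comes in.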

Pattern recognition prowess served our ancestors well. However, today we are confronted by a deluge of data that are far more abstract, complicated, and difficult to interpret than were annual seasons and the sounds of predators. The number of possible patterns that can be identified relative to the number that are genuinely useful has grown exponentially, which means that the chance that a discovered pattern is useful is rapidly approaching zero. Coincidental streaks, clusters, and correlations are the norm, not the exception. Computer algorithms can easily identify an essentially unlimited number of phantom patterns and relationships that vanish when confronted with fresh data. The paradox of big data is that the more data we ransack for patterns, the more likely it is that what we find will be worthless. Our challenge is to overcome our inherited inclination to think that all patterns are meaningful.
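The "phantom patterns vanish on fresh data" claim is easy to demonstrate. The sketch below (all names and sizes are illustrative) ransacks 200 unrelated random series for the strongest pairwise correlation, then re-measures that same pair on fresh data from the identical process:

```python
import numpy as np

# Among many unrelated random series, some pair will correlate strongly
# by pure chance; the correlation then vanishes on fresh data.
rng = np.random.default_rng(0)
n_series, n_obs = 200, 50
old = rng.normal(size=(n_series, n_obs))    # the data we "ransack"
new = rng.normal(size=(n_series, n_obs))    # fresh data, same process

corr = np.corrcoef(old)                     # all pairwise correlations
np.fill_diagonal(corr, 0.0)                 # ignore self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)

in_sample = corr[i, j]
out_of_sample = np.corrcoef(new[i], new[j])[0, 1]
print(f"best in-sample correlation: {in_sample:+.2f}")
print(f"same pair on fresh data:    {out_of_sample:+.2f}")
```

With ~20,000 pairs to choose from, the best in-sample correlation is sizeable, yet the out-of-sample correlation for that pair is near zero: the pattern was a coincidence of the search, not a property of the data.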

Understanding change is essential in most scientific fields. This is highlighted by the importance of issues such as shifts in public health and changes in public opinion regarding politicians and policies. Nevertheless, our measurements of the world around us are often imperfect. For example, measurements of attitudes might be biased by social desirability, while estimates of health may be marred by low sensitivity and specificity. In this book we tackle the important issue of how to understand and estimate change in the context of data that are imperfect and exhibit measurement error. The book brings together the latest advances in the area of estimating change in the presence of measurement error from a number of different fields, such as survey methodology, sociology, psychology, statistics, and health. Furthermore, it covers the entire process, from the best ways of collecting longitudinal data, to statistical models to estimate change under uncertainty, to examples of researchers applying these methods in the real world. The book introduces the reader to essential issues of longitudinal data collection such as memory effects, panel conditioning (or mere measurement effects), the use of administrative data, and the collection of multi-mode longitudinal data. It also introduces the reader to some of the most important models used in this area, including quasi-simplex models, latent growth models, latent Markov chains, and equivalence/DIF testing. Further, it discusses the use of vignettes in the context of longitudinal data and estimation methods for multilevel models of change in the presence of measurement error.

This book is devoted to the mathematical modelling of electromagnetic materials. Electromagnetism in matter is developed with particular emphasis on material effects, which are ascribed to memory in time and nonlocality. Within the mathematical modelling, thermodynamics of continuous media plays a central role in that it places significant restrictions on the constitutive equations. Further, as shown in connection with uniqueness, existence and stability, variational settings, and wave propagation, a correct formulation of the pertinent problems is based on knowledge of the thermodynamic restrictions for the material. The book is divided into four parts. Part I (chapters 1 to 4) reviews the basic concepts of electromagnetism, starting from the integral form of Maxwell’s equations and then turning attention to the physical motivation for materials with memory. Part II (chapters 5 to 9) deals with thermodynamics of systems with memory and applications to evolution and initial/boundary-value problems. It contains developments and results which are unusual in textbooks on electromagnetism and arise from the research literature, mainly post-1960s. Part III (chapters 10 to 12) outlines some topics of materials modelling, namely nonlinearity, nonlocality, superconductivity, and magnetic hysteresis, which are of great interest both in mathematics and in applications.

This book is devoted to problems of shape identification in the context of (inverse) scattering problems and problems of impedance tomography. In contrast to traditional methods, which are based on iterative schemes for solving sequences of corresponding direct problems, this book presents a completely different method. The Factorization Method avoids the need to solve the (time-consuming) direct problems. Furthermore, no a priori information about the type of scatterer (penetrable or impenetrable), type of boundary condition, or number of components is needed. The Factorization Method can be considered as an example of a Sampling Method. The book aims to construct a binary criterion on the known data to decide whether a given point z lies inside or outside the unknown domain D. By choosing a grid of sampling points z in a region known to contain D, the characteristic function of D can be computed (in the case of finite data, only approximately). The book also introduces some alternative Sampling Methods.
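The sampling idea itself (test every grid point with a binary criterion, then read off the characteristic function of D) can be sketched in a few lines. Note that the `inside` indicator below is a stand-in using a known disk purely for illustration, whereas the Factorization Method derives its criterion from the measured data without knowing D:

```python
import numpy as np

# Sketch of the sampling idea: evaluate a binary criterion at each grid
# point z and recover the characteristic function of D.  Here D is a
# known unit disk and `inside` is an illustrative stand-in for the
# data-driven indicator the method actually constructs.
def inside(z, center=(0.0, 0.0), radius=1.0):
    return bool(np.hypot(z[0] - center[0], z[1] - center[1]) < radius)

xs = np.linspace(-2.0, 2.0, 81)                 # sampling grid, spacing 0.05
grid = [(x, y) for x in xs for y in xs]
chi = {z: inside(z) for z in grid}              # characteristic function on the grid
area = sum(chi.values()) * (xs[1] - xs[0]) ** 2
print(f"estimated area of D: {area:.2f}")       # unit disk, so about pi
```

With real finite data the indicator is only approximate, which is exactly the caveat in the abstract: the characteristic function is then recovered approximately rather than exactly.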

This book gives a modern, comprehensive treatment of the theory and important results on optimal spacecraft trajectories. In most cases “optimal” means minimum propellant: the less propellant required, the more payload delivered to the destination. Both necessary and sufficient conditions for an optimal solution are analysed. Numerous illustrative examples are included, and problems are provided at the ends of the chapters along with references. Newer topics such as cooperative rendezvous and second-order conditions are considered. Seven appendices are included to supplement the text, some with problems. Both classical results and newer research results are included, and a new test for a conjugate point is demonstrated. The book serves as both a graduate-level textbook and a scholarly reference.
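As a minimal worked instance of propellant-optimal transfer, here is the classical Hohmann transfer between two circular coplanar orbits, which minimizes total delta-v among two-impulse transfers. The LEO-to-GEO radii below are illustrative; the book treats far more general trajectory problems:

```python
import math

# Hohmann transfer: two impulses along a transfer ellipse tangent to
# both circular orbits.  mu is Earth's gravitational parameter.
mu = 3.986004418e5          # km^3/s^2
r1, r2 = 6678.0, 42164.0    # km: ~300 km-altitude LEO to GEO (illustrative)

a_t = (r1 + r2) / 2                          # transfer-ellipse semimajor axis
v1 = math.sqrt(mu / r1)                      # circular speed at r1
v2 = math.sqrt(mu / r2)                      # circular speed at r2
v_peri = math.sqrt(mu * (2 / r1 - 1 / a_t))  # transfer speed at perigee (vis-viva)
v_apo = math.sqrt(mu * (2 / r2 - 1 / a_t))   # transfer speed at apogee
dv = (v_peri - v1) + (v2 - v_apo)            # total delta-v of both burns
print(f"total delta-v: {dv:.3f} km/s")       # roughly 3.9 km/s for these radii
```

By the rocket equation, lower total delta-v translates directly into less propellant and hence more deliverable payload, which is why delta-v is the usual optimization currency.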

The vast majority of random processes in the real world have no memory — the next step in their development depends purely on their current state. Stochastic realizations are therefore defined purely in terms of successive event-time pairs, and such systems are easy to simulate irrespective of their degree of complexity. However, whilst the associated probability equations are straightforward to write down, their solution usually requires the use of approximation and perturbation procedures. Traditional books, heavy in mathematical theory, often ignore such methods and attempt to force problems into a rigid framework of closed-form solutions.
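The simulation recipe described above (successive event-time pairs drawn from memoryless waiting times) can be sketched for a pure death process; this is a simplest-case Gillespie-style simulation, with an illustrative rate constant and population size:

```python
import random

# Pure death process: each of n individuals decays independently at rate k,
# so the waiting time to the next event is exponential with total rate n*k.
# Memorylessness means this is all the simulator ever needs to know.
def simulate_decay(n0=1000, k=0.1, seed=1):
    random.seed(seed)
    n, t, history = n0, 0.0, []
    while n > 0:
        t += random.expovariate(n * k)   # memoryless waiting time to next event
        n -= 1                           # the event itself: one decay
        history.append((t, n))           # the event-time pair
    return history

history = simulate_decay()
t_half = next(t for t, n in history if n <= 500)
print(f"time for half the population to decay: {t_half:.2f}")
```

The printed time clusters around ln(2)/k ≈ 6.9, the analytic half-life, illustrating the blurb's point: simulation is trivial even when the corresponding probability (master) equations need approximation methods to solve.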

Monoidal category theory serves as a powerful framework for describing logical aspects of quantum theory, giving an abstract language for parallel and sequential composition and a conceptual way to understand many high-level quantum phenomena. Here, we lay the foundations for this categorical quantum mechanics, with an emphasis on the graphical calculus that makes computation intuitive. We describe superposition and entanglement using biproducts and dual objects, and show how quantum teleportation can be studied abstractly using these structures. We investigate monoids, Frobenius structures and Hopf algebras, showing how they can be used to model classical information and complementary observables. We describe the CP construction, a categorical tool to describe probabilistic quantum systems. The last chapter introduces higher categories, surface diagrams and 2-Hilbert spaces, and shows how the language of duality in monoidal 2-categories can be used to reason about quantum protocols, including quantum teleportation and dense coding. Previous knowledge of linear algebra, quantum information or category theory would give an ideal background for studying this text, but it is not assumed, with essential background material given in a self-contained introductory chapter. Throughout the text, we point out links with many other areas, such as representation theory, topology, quantum algebra, knot theory and probability theory, and present nonstandard models including sets and relations. All results are stated rigorously and full proofs are given as far as possible, making this book an invaluable reference for modern techniques in quantum logic, with much of the material not available in any other textbook.

The basic goal of an inverse eigenvalue problem is to reconstruct the physical parameters of a certain system from knowledge of, or prescribed requirements on, its dynamical behavior. Depending on the application, inverse eigenvalue problems appear in many different forms. This book discusses the fundamental questions, some known results, many applications, mathematical properties, a variety of numerical techniques, as well as several open problems.
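In its most naive form, "reconstruct a matrix with a prescribed spectrum" can be sketched as below. The unstructured symmetric case shown is only illustrative; the structured problems the book studies (where the matrix must additionally have a physically meaningful form, e.g. Jacobi or Toeplitz) are substantially harder:

```python
import numpy as np

# Prescribe a spectrum, then build a symmetric matrix realizing it by
# conjugating the diagonal of target eigenvalues with a random
# orthogonal matrix.  All concrete values here are illustrative.
rng = np.random.default_rng(42)
target = np.array([1.0, 2.0, 5.0])            # desired eigenvalues
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal Q
a = q @ np.diag(target) @ q.T                 # symmetric, spectrum = target
print(np.round(np.linalg.eigvalsh(a), 6))     # recovers [1. 2. 5.]
```

The interesting inverse problems start when, unlike here, the admissible matrices are constrained by the physics, so that a realizing matrix may not exist or may be hard to compute.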

This book comprehensively explores elasticity imaging and examines recent, important developments in asymptotic imaging, modeling, and analysis of deterministic and stochastic elastic wave propagation phenomena. It derives the best possible functional images for small inclusions and cracks within the context of stability and resolution, and introduces a topological-derivative-based imaging framework for detecting elastic inclusions in the time-harmonic regime. For imaging extended elastic inclusions, accurate optimal control methodologies are designed and the effects of uncertainties of the geometric or physical parameters on stability and resolution properties are evaluated. In particular, the book shows how localized damage to a mechanical structure affects its dynamic characteristics, and how measured eigenparameters are linked to elastic inclusion or crack location, orientation, and size. Demonstrating a novel method for identifying, locating, and estimating inclusions and cracks in elastic structures, the book opens possibilities for a mathematical and numerical framework for elasticity imaging of nanoparticles and cellular structures.

This book presents many of the mathematical concepts, structures, and techniques used in the study of rays, waves, and scattering. It includes discussions of how ocean waves are refracted around islands and underwater ridges, how seismic waves are refracted in the earth's interior, how atmospheric waves are scattered by mountains and ridges, how the scattering of light waves produces the blue sky, and meteorological phenomena such as rainbows and coronas. This book is a valuable resource for practitioners, graduate students, and advanced undergraduates in applied mathematics, theoretical physics, and engineering. Bridging the gap between advanced treatments of the subject written for specialists and less mathematical books aimed at beginners, this unique mathematical compendium features problems and exercises throughout, geared to various levels of sophistication and covering everything from Ptolemy's theorem to Airy integrals (as well as more technical material), along with several informative appendixes.
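As a taste of the rainbow material mentioned above, the geometric-optics (Descartes) rainbow angle can be computed in a few lines; the Airy-integral treatment the book discusses refines exactly this calculation. The refractive index and grid resolution below are illustrative:

```python
import numpy as np

# Descartes' rainbow: for one internal reflection in a spherical drop,
# a ray at incidence i is deviated by D(i) = pi + 2i - 4r, with
# sin(i) = n sin(r).  The rainbow sits at the minimum deviation, about
# 42 degrees from the antisolar point for water (n ~ 1.333).
n = 1.333
i = np.linspace(0.0, np.pi / 2, 100000)       # all incidence angles
r = np.arcsin(np.sin(i) / n)                  # Snell's law inside the drop
deviation = np.pi + 2 * i - 4 * r
d_min = np.degrees(deviation.min())
print(f"minimum deviation: {d_min:.1f} deg "
      f"-> rainbow about {180 - d_min:.1f} deg from the antisolar point")
```

Rays pile up near the minimum-deviation direction, which is why the sky brightens sharply there; explaining the intensity and the supernumerary bows near that caustic is where Airy integrals take over from pure geometry.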

Hybrid dynamical systems exhibit continuous and instantaneous changes, having features of continuous-time and discrete-time dynamical systems. Filled with a wealth of examples to illustrate concepts, this book presents a complete theory of robust asymptotic stability for hybrid dynamical systems that is applicable to the design of hybrid control algorithms—algorithms that feature logic, timers, or combinations of digital and analog components. With the tools of modern mathematical analysis, this book unifies and generalizes earlier developments in continuous-time and discrete-time nonlinear systems. It presents hybrid system versions of the necessary and sufficient Lyapunov conditions for asymptotic stability, invariance principles, and approximation techniques, and examines the robustness of asymptotic stability, motivated by the goal of designing robust hybrid control algorithms. This self-contained and classroom-tested book requires standard background in mathematical analysis and differential equations or nonlinear systems. It will interest graduate students in engineering as well as students and researchers in control, computer science, and mathematics.
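The standard toy instance of such a hybrid system is the bouncing ball: continuous free fall (flow) punctuated by instantaneous impacts (jumps). A minimal simulation, with illustrative parameter values, makes the two regimes concrete:

```python
# Bouncing ball as a hybrid system: flow = free fall, jump = impact
# with restitution.  Parameter values are illustrative.
g, e = 9.81, 0.8            # gravity (m/s^2), coefficient of restitution
h, v, t, dt = 1.0, 0.0, 0.0, 1e-4
bounces = 0
while bounces < 5:
    h += v * dt             # flow: Euler step of the continuous dynamics
    v -= g * dt
    t += dt
    if h <= 0.0 and v < 0.0:
        h, v = 0.0, -e * v  # jump: instantaneous velocity reversal and loss
        bounces += 1
print(f"rebound speed after 5 bounces: {abs(v):.2f} m/s")
```

Each rebound scales the impact speed by e, so the printed speed is close to sqrt(2g) * e^5 for a 1 m drop; the accumulation of ever-faster bounces (Zeno behavior) is one of the phenomena a rigorous hybrid-systems theory must handle.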

How could we use living cells to perform computation? Would our definition of computation change as a consequence? Could such a cell-computer outperform digital computers? These are some of the questions that the study of Membrane Computing tries to answer, and they underlie what is treated in this monograph. The focus here is on two lines of research: the descriptional and the computational complexity of models in Membrane Computing. In this context the book reports results for only some of the models present in this framework. The models considered here nevertheless represent a very relevant part of all the models introduced so far in the study of Membrane Computing. They are among the most studied models in the field, and they cover a broad range of features (using symbol objects or string objects, based only on communication, inspired by intra- and intercellular processes, having or not having a tree as underlying structure, etc.) that gives a sense of the enormous flexibility of this framework. Links with biology and Petri nets are constant throughout the book. The book also aims to inspire research: it gives suggestions for research at various levels of difficulty and clearly indicates their importance and the relevance of the possible outcomes. Readers new to this field will find the provided examples particularly useful in understanding the treated topics.

This book presents a view of the state of the art in multidimensional hyperbolic partial differential equations, with a particular emphasis on problems in which modern tools of analysis have proved useful. Ordered in sections of gradually increasing degrees of difficulty, the text first covers linear Cauchy problems and linear initial boundary value problems, before moving on to nonlinear problems, including shock waves. The book finishes with a discussion of the application of hyperbolic PDEs to gas dynamics, culminating with the shock wave analysis for real fluids.

Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment of vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynamical systems are strongly interconnected and consist of interacting subsystems exchanging matter, energy, or information with the environment. The sheer size, or dimensionality, of these systems necessitates decentralized analysis and control system synthesis methods for their analysis and design. Written in a theorem-proof format with examples to illustrate new concepts, this book addresses continuous-time, discrete-time, and hybrid large-scale systems. It develops finite-time stability and finite-time decentralized stabilization, thermodynamic modeling, maximum entropy control, and energy-based decentralized control. This book will interest applied mathematicians, dynamical systems theorists, control theorists, and engineers, and anyone seeking a fundamental and comprehensive understanding of large-scale interconnected dynamical systems and control.

General Relativity has passed all experimental and observational tests to model the motion of isolated bodies with strong gravitational fields, though the mathematical and numerical study of these motions is still in its infancy. It is believed that General Relativity models our cosmos, with a manifold of dimension possibly greater than four and of debatable topology, opening a vast field of investigation for mathematicians and physicists alike. Remarkable conjectures have been proposed, many results have been obtained, but many fundamental questions remain open. This book overviews the basic ideas in General Relativity, introduces the necessary mathematics, and discusses some of the key open questions in the field.

Scientific rigor and critical thinking skills are indispensable in this age of big data, because machine learning and artificial intelligence are often led astray by meaningless patterns. The 9 Pitfalls of Data Science is loaded with entertaining real-world examples of both successful and misguided approaches to interpreting data: grand successes and epic failures alike. Anyone can learn to distinguish between good data science and nonsense. We are confident that readers will learn how to avoid being duped by data and make better, more informed decisions. Whether they want to be effective creators, interpreters, or users of data, they need to know the nine pitfalls of data science.

Communicating with Data: The Art of Writing for Data Science aims to help students and researchers write about their data insights in a way that is both compelling and faithful to the data. It is intended both as a resource for students who want to learn how to write about scientific findings, formally and for broader audiences, and as a textbook for instructors teaching science communication. In addition, a researcher looking for help with writing can use this book to self-train. The book consists of five parts. Part I helps the novice learn to write by reading the work of others. Part II delves into the specifics of how to describe data at a level appropriate for publication, create informative and effective visualizations, and communicate an analysis pipeline through well-written, reproducible code. Part III demonstrates how to reduce a data analysis to a compelling story and organize and write the first draft of a technical paper. Part IV addresses revision; this includes advice on writing about statistical findings in a clear and accurate way, general writing advice, and strategies for proofreading and revising. Finally, Part V gives advice about communication strategies beyond the written page, including giving talks, building a professional network, and participating in online communities. This part also contains 22 “portfolio prompts” aimed at building on the guidance and examples in the earlier parts of the book and at building a writer’s portfolio of data communication.