Differential and integral equations involve important mathematical techniques, and as such will be encountered by mathematicians and by physical and social scientists in their undergraduate courses. This text provides a clear, comprehensive guide to first- and second-order ordinary and partial differential equations.
Mathematics by Andrei D. Polyanin, Alexander V. Manzhirov
Unparalleled in scope compared to the literature currently available, the Handbook of Integral Equations, Second Edition contains over 2,500 integral equations with solutions, as well as analytical and numerical methods for solving linear and nonlinear equations. It explores Volterra, Fredholm, Wiener–Hopf, Hammerstein, Uryson, and other equations that arise in mathematics, physics, engineering, the sciences, and economics. With 300 additional pages, this edition covers much more material than its predecessor.

New to the Second Edition:
• New material on Volterra, Fredholm, singular, hypersingular, dual, and nonlinear integral equations, integral transforms, and special functions
• More than 400 new equations with exact solutions
• New chapters on mixed multidimensional equations and methods of integral equations for ODEs and PDEs
• Additional examples for illustrative purposes

To accommodate different mathematical backgrounds, the authors avoid special terminology wherever possible, outline some of the methods in a schematic, simplified manner, and arrange the material in increasing order of complexity. The book can be used as a database of test problems for numerical and approximate methods for solving linear and nonlinear integral equations.
The book contains seven survey papers on ordinary differential equations. Their common feature is a focus on nonlinear equations, which reflects the situation in modern mathematical modelling: nonlinear mathematical models are more realistic and describe real-world problems more accurately. The implication is that new methods and approaches have to be sought, developed, and adopted in order to understand and solve nonlinear ordinary differential equations. The purpose of this volume is to inform the mathematical community, as well as other scientists who use the mathematical apparatus of ordinary differential equations, about some of these methods and their possible applications.
Generality is a key value in scientific discourses and practices. Throughout history, it has received a variety of meanings and of uses. This collection of original essays aims to inquire into this diversity. Through case studies taken from the history of mathematics, physics and the life sciences, the book provides evidence of different ways of understanding the general in various contexts. It aims to show how collectives have valued generality and how they have worked with specific types of "general" entities, procedures, and arguments. The book connects history and philosophy of mathematics and the sciences at the intersection of two of the most fruitful contemporary lines of research: historical epistemology, in which values (e.g. "objectivity," "accuracy") are studied from a historical viewpoint; and the philosophy of scientific practice, in which conceptual developments are seen as embedded in networks of social, instrumental, and textual practices. Each chapter provides a self-contained case study, with a clear exposition of the scientific content at stake. The collection covers a wide range of scientific domains - with an emphasis on mathematics - and historical periods. It thus allows a comparative perspective which suggests a non-linear pattern for a history of generality. The introductory chapter spells out the key issues and points to the connections between the chapters.
Collating papers from a number of internationally renowned mathematicians, this book surveys both the current theory and the main areas of application of Heun's equation. The equation crops up in a wide variety of problems in applied mathematics, such as integral equations of potential theory, wave propagation, electrostatic oscillation, and Schrödinger's equation. This major collection will be of particular interest to researchers in nonlinear Hamiltonian systems, as well as those working in mathematical biology.
Psychology by Jerome R. Busemeyer, Zheng Wang, Ami Eidels, James T. Townsend
Author: Jerome R. Busemeyer, Zheng Wang, Ami Eidels, James T. Townsend
Publisher: Oxford University Press, USA
This Oxford Handbook offers a comprehensive and authoritative review of important developments in computational and mathematical psychology. With chapters written by leading scientists across a variety of subdisciplines, it examines the field's influence on related research areas such as cognitive psychology, developmental psychology, clinical psychology, and neuroscience. The Handbook emphasizes examples and applications of the latest research, and will appeal to readers possessing various levels of modeling experience. The Oxford Handbook of Computational and Mathematical Psychology covers the key developments in elementary cognitive mechanisms (signal detection, information processing, reinforcement learning), basic cognitive skills (perceptual judgment, categorization, episodic memory), higher-level cognition (Bayesian cognition, decision making, semantic memory, shape perception), modeling tools (Bayesian estimation and other new model comparison methods), and emerging new directions in computational and mathematical psychology (neurocognitive modeling, applications to clinical psychology, quantum cognition). The Handbook would make an ideal graduate-level textbook for courses in computational and mathematical psychology. Readers ranging from advanced undergraduates to experienced faculty members and researchers in virtually any area of psychology, including cognitive science and related social and behavioral sciences such as consumer behavior and communication, will find the text useful.
Business & Economics by Jeffrey Racine, Liangjun Su, Aman Ullah
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the classical parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the modeling of cross-section, time series, panel, and spatial data. Topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; methodologies related to additive models; sieve regression, nonparametric and semiparametric regression, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and their application in Econometrics; identification, estimation, and specification problems in semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
Since the birth of Econometrics almost eight decades ago, theoretical and applied Econometrics and Statistics have, for the most part, proceeded along ‘classical’ lines, which typically invoke rigid, user-specified parametric models, often linear ones. However, during the past three decades a growing awareness has emerged that results based on poorly specified parametric models can lead to misleading policy and forecasting conclusions. In light of this, around three decades ago the subject of nonparametric Econometrics and nonparametric Statistics emerged as a field whose defining feature is that models can be ‘data-driven’, hence tailored to the data set at hand. Many of these approaches are described in the books by Prakasa Rao (1983), Härdle (1990), Fan and Gijbels (1996), Pagan and Ullah (1999), Yatchew (2003), Li and Racine (2007), and Horowitz (2009), and they appear in a wide range of journal outlets. Recognition of the importance of this subject, along with advances in computer technology, has fueled research in this area, and the literature continues to grow at an exponential rate. This pace of innovation makes it difficult for specialists and nonspecialists alike to keep abreast of recent developments, and there has been no single source for those seeking an informed overview of them. This handbook contains chapters that cover recent advances and major themes in the nonparametric and semiparametric domain. The chapters provide an up-to-date reference source for students and researchers who require definitive discussions of cutting-edge developments in applied Econometrics and Statistics. Contributors have been chosen on the basis of their expertise, their international reputation, and their experience in presenting new and technical material.
This handbook highlights the interface between econometric and statistical methods for nonparametric and semiparametric procedures; it comprises new, previously unpublished research chapters by leading international econometricians and statisticians. It provides a balanced viewpoint of recent developments, with chapters covering advances in methodology, inverse problems, additive models, model selection and averaging, time series, and cross-section analysis.

Methodology

Semi-nonparametric (SNP) models are models in which only part of the model is parameterized, while the unspecified part is an unknown function represented by an infinite series expansion. SNP models are, in essence, models with infinitely many parameters. In Chapter 1, Herman J. Bierens shows how orthonormal functions can be constructed and how to build general series representations of density and distribution functions in an SNP framework. Bierens also reviews the necessary Hilbert space theory. The term ‘special regressor’ originates in Lewbel (1998) and has been employed in a wide variety of limited dependent variable models, including binary, ordered, and multinomial choice, as well as censored regression, selection, treatment, and truncated regression models, among others (a special regressor is an observed covariate with properties that facilitate identification and estimation of a latent variable model). In Chapter 2, Arthur Lewbel provides the background needed to understand how and why special regressor methods work, and he details their application to identification and estimation of latent variable moments and parameters.

Inverse Problems

Ill-posed problems surface in a range of econometric models (a problem is ‘well-posed’ if its solution exists, is unique, and is stable, while it is ‘ill-posed’ if any of these conditions is violated).
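The orthonormal-series idea behind the SNP construction in Chapter 1 can be sketched in a few lines: with the cosine basis e_0(x) = 1, e_k(x) = √2·cos(kπx) on [0, 1], the identity E[e_k(X)] = ∫ f·e_k means that sample means of the basis functions estimate the series coefficients of an unknown density f. The target density, sample size, and truncation point K below are illustrative assumptions, not taken from the chapter.

```python
# Toy series (SNP-style) density estimator with a cosine basis.
# Assumed setup: X ~ triangular density on [0,1] with mode 0.5.
import math
import random

random.seed(42)
sample = [random.triangular(0.0, 1.0, 0.5) for _ in range(5000)]
K = 6  # truncation point: use basis functions e_0, ..., e_K

def e(k, x):
    """Orthonormal cosine basis on [0, 1]."""
    return 1.0 if k == 0 else math.sqrt(2.0) * math.cos(k * math.pi * x)

# Sample means of e_k(X_i) estimate the series coefficients of f.
coef = [sum(e(k, xi) for xi in sample) / len(sample) for k in range(K + 1)]

def f_hat(x):
    """Truncated series estimate of the density at x."""
    return sum(c * e(k, x) for k, c in enumerate(coef))

# The estimate integrates to coef[0] = 1 and peaks near the mode 0.5.
grid = [i / 1000 for i in range(1001)]
area = sum(f_hat(x) for x in grid) / len(grid)
```

Because the non-constant basis functions integrate to zero, the estimate automatically integrates to coef[0] = 1; in practice an extra step is needed to enforce nonnegativity, which Bierens's construction handles differently.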
In Chapter 3, Marine Carrasco, Jean-Pierre Florens and Eric Renault study the estimation of a function ϕ in linear inverse problems of the form Tϕ = r, where r is observed only with error and T may be given or estimated. Four examples are relevant for Econometrics, namely (i) density estimation, (ii) deconvolution problems, (iii) linear regression with an infinite number of possibly endogenous explanatory variables, and (iv) nonparametric instrumental variables estimation. In the first two cases T is given, whereas in the other two it is estimated, at a parametric or nonparametric rate respectively. The chapter reviews some main results for these models, such as the concepts of the degree of ill-posedness, the regularity of ϕ, regularized estimation, and the rates of convergence typically obtained. Asymptotic normality results for the regularized solution ϕ̂α are obtained and can be used to construct (asymptotic) tests on ϕ. In Chapter 4, Victoria Zinde-Walsh provides a nonparametric analysis for several classes of models, covering cases such as classical measurement error, regression with errors in variables, and other models that may be represented in a form involving convolution equations. The focus here is on conditions for the existence of solutions, nonparametric identification, and well-posedness in the space of generalized functions (tempered distributions). This space provides advantages over working in function spaces by relaxing assumptions and extending the results to a wider variety of models, for example by not requiring the existence of an underlying density. Classes of (generalized) functions for which solutions exist are defined; identification conditions, partial identification, and its implications are discussed. Conditions for well-posedness are given, and the related issues of plug-in estimation and regularization are examined. Additive semiparametric models are frequently adopted in applied settings to mitigate the curse of dimensionality.
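The ill-posedness and regularization theme of Chapters 3 and 4 can be seen in a minimal numerical sketch: when T is nearly singular, directly solving Tϕ = r amplifies small errors in r, while the Tikhonov-regularized solution ϕ̂α = (T′T + αI)⁻¹T′r remains stable. The 2×2 system, the noise level, and the choice of α below are illustrative assumptions, not taken from the handbook.

```python
# Tikhonov regularization for an ill-posed linear inverse problem T*phi = r.

def mat_vec(T, x):
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

def transpose(T):
    return [list(row) for row in zip(*T)]

def mat_mat(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

def tikhonov(T, r, alpha):
    """Regularized solution phi_alpha = (T'T + alpha*I)^{-1} T' r."""
    Tt = transpose(T)
    TtT = mat_mat(Tt, T)
    for i in range(2):
        TtT[i][i] += alpha
    return solve2(TtT, mat_vec(Tt, r))

# Nearly singular operator: small errors in r blow up the naive solution.
T = [[1.0, 1.0], [1.0, 1.0001]]
phi_true = [1.0, 1.0]
r_clean = mat_vec(T, phi_true)
r_noisy = [r_clean[0] + 1e-4, r_clean[1] - 1e-4]

naive = solve2(T, r_noisy)            # unstable: far from phi_true
regular = tikhonov(T, r_noisy, 1e-3)  # stable: close to phi_true
```

The choice of α trades bias against stability, which is exactly the degree-of-ill-posedness/regularity trade-off governing the convergence rates reviewed in Chapter 3.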
They have proven to be extremely popular and tend to be simpler to interpret than fully nonparametric models. In Chapter 5, Joel L. Horowitz considers estimation of nonparametric additive models. The author describes methods for estimating standard additive models along with additive models with a known or unknown link function. Tests of additivity are reviewed along with an empirical example that illustrates the use of additive models in practice. In Chapter 6, Shujie Ma and Lijian Yang present an overview of additive regression where the models are fit by spline-backfitted kernel smoothing (SBK), and they focus on improvements relative to existing methods (e.g., Linton (1997)). The SBK estimation method has several advantages over most existing methods. First, as pointed out in Sperlich et al. (2002), the estimator of Linton (1997) mixes up different projections, making it uninterpretable if the real data-generating process deviates from additivity, while the projections in both steps of the SBK estimator are taken with respect to the same measure. Second, the SBK method is computationally expedient, since the pilot spline estimator is much faster to compute than the pilot kernel estimator proposed in Linton (1997). Third, the SBK estimator is shown to be as efficient as the "oracle smoother" uniformly over any compact range, whereas Linton (1997) proved such oracle efficiency only at a single point. Moreover, the regularity conditions needed by the SBK estimation procedure are natural, appealing, and close to minimal. In contrast, higher-order smoothness is needed with growing dimensionality of the regressors in Linton and Nielsen (1995), and stronger, more obscure conditions are assumed for the two-stage estimation proposed by Horowitz and Mammen (2004). In Chapter 7, Enno Mammen, Byeong U. Park and Melanie Schienle give an overview of smooth backfitting estimators in additive models.
They illustrate their wide applicability in models closely related to additive models, such as (i) nonparametric regression with dependent errors, where the errors can be transformed to white noise by a linear transformation, (ii) nonparametric regression with repeatedly measured data, (iii) nonparametric panels with fixed effects, (iv) simultaneous nonparametric equation models, and (v) non- and semiparametric autoregression and GARCH models. They also review extensions to varying coefficient models, additive models with missing observations, and the case of nonstationary covariates.

Model Selection and Averaging

"Sieve estimators" are a class of nonparametric estimators whose model complexity increases with the sample size. In Chapter 8, Bruce Hansen considers "model selection" and "model averaging" of nonparametric sieve regression estimators. The concepts of series and sieve approximations are reviewed, along with least squares estimation of sieve approximations and the measurement of estimator accuracy by integrated mean-squared error (IMSE). The author demonstrates that the critical issue in applications is selecting the order of the sieve, because the IMSE varies greatly with this choice. The author adopts the cross-validation criterion as an estimator of mean-squared forecast error and IMSE, extends existing optimality theory by showing that cross-validation selection is asymptotically IMSE-equivalent to the infeasible best sieve approximation, introduces weighted averages of sieve regression estimators, and demonstrates that averaging estimators have lower IMSE than selection estimators. In Chapter 9, Liangjun Su and Yonghui Zhang review the literature on variable selection in nonparametric and semiparametric regression models via shrinkage.
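Hansen's point that the decisive tuning choice is the sieve order can be illustrated with a toy sieve: a regressogram whose number of bins m grows with the sample, with m chosen by leave-one-out cross-validation. The data-generating process, sample size, and grid of candidate orders below are illustrative assumptions, not taken from the chapter.

```python
# Cross-validated choice of sieve order for a regressogram (step-function) sieve.
# Assumed DGP: y = sin(2*pi*x) + Gaussian noise on an equally spaced design.
import math
import random

random.seed(0)
n = 200
x = [i / n for i in range(n)]
y = [math.sin(2 * math.pi * xi) + random.gauss(0, 0.3) for xi in x]

def loo_cv(x, y, m):
    """Leave-one-out CV criterion for an m-bin regressogram on [0, 1]."""
    bins = [[] for _ in range(m)]
    for xi, yi in zip(x, y):
        bins[min(int(xi * m), m - 1)].append(yi)
    err = 0.0
    for xi, yi in zip(x, y):
        b = bins[min(int(xi * m), m - 1)]
        if len(b) < 2:
            return float("inf")  # cannot leave one observation out
        pred = (sum(b) - yi) / (len(b) - 1)  # leave-one-out bin mean
        err += (yi - pred) ** 2
    return err / len(x)

candidates = range(1, 31)
cv = {m: loo_cv(x, y, m) for m in candidates}
m_star = min(cv, key=cv.get)  # CV-selected sieve order
```

Too few bins leave large approximation bias (the m = 1 sieve is just the sample mean), while too many bins leave each bin with few observations and high variance; cross-validation balances the two, which is the IMSE trade-off Hansen formalizes.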
The survey includes simultaneous variable selection and estimation through the methods of the least absolute shrinkage and selection operator (Lasso), smoothly clipped absolute deviation (SCAD), or their variants, with attention restricted to nonparametric and semiparametric regression models. In particular, the authors consider variable selection in additive models, partially linear models, functional/varying coefficient models, single index models, general nonparametric regression models, and semiparametric/nonparametric quantile regression models. In Chapter 10, Jeffrey S. Racine and Christopher F. Parmeter propose a data-driven approach for testing whether or not two competing approximate models are equivalent in terms of their expected true error (i.e., their expected performance on unseen data drawn from the same DGP). The test they consider is applicable in cross-sectional and time-series settings; furthermore, in time-series settings their method overcomes two of the drawbacks of dominant approaches, namely their reliance on a single split of the data and the need for a sufficiently large ‘hold-out’ sample for these tests to possess adequate power. They assess the finite-sample performance of the test via Monte Carlo simulation and consider a number of empirical applications that highlight the utility of the approach. Default probability (the probability that a borrower will fail to service its obligations) is central to the study of risk management. Bonds and other tradable debt instruments are the main source of default risk for most individual and institutional investors, whereas loans are the largest and most obvious source of default risk for banks. Default prediction is becoming more and more important for banks, especially in risk management, in order to measure their clients' degree of risk. In Chapter 11, Wolfgang Härdle, Dedy Dwi Prastyo and Christian Hafner consider the use of Support Vector Machines (SVM) for modeling default probability.
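The shrinkage mechanism behind the Lasso methods surveyed by Su and Zhang can be sketched with the soft-thresholding operator: in the special case of an orthonormal design, the Lasso estimate is simply the soft-thresholded OLS coefficient, so small coefficients are set exactly to zero and variable selection happens simultaneously with estimation. The coefficient values and penalty level below are illustrative assumptions, not taken from the chapter.

```python
# Soft-thresholding: the closed-form Lasso solution under an orthonormal design.

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# OLS estimates: two strong signals followed by three near-zero noise coefficients.
beta_ols = [2.5, -1.8, 0.05, -0.08, 0.02]
lam = 0.1  # penalty level (illustrative)

beta_lasso = [soft_threshold(b, lam) for b in beta_ols]
selected = [j for j, b in enumerate(beta_lasso) if b != 0.0]  # -> [0, 1]
```

Note the shrinkage bias on the large coefficients (2.5 becomes 2.4); SCAD-type penalties are designed precisely to reduce this bias on large signals while still zeroing out small ones.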
SVM is a state-of-the-art nonlinear classification technique that is well suited to the study of default risk. The chapter emphasizes SVM-based default prediction applied to the CreditReform database. The authors optimize the SVM parameters using an evolutionary algorithm (the so-called "Genetic Algorithm") and show how the "imbalanced problem" may be overcome by the use of "down-sampling" and "oversampling." In Chapter 12, Peter C. B. Phillips and Zhipeng Liao provide an overview of recent developments in series estimation of stochastic processes and some of their applications in Econometrics. They emphasize the idea that a stochastic process may, under certain conditions, be represented in terms of a set of orthonormal basis functions, giving a series representation that involves deterministic functions. Several applications of this series approximation method are discussed. The first shows how a continuous function can be approximated by a linear combination of Brownian motions (BMs), which is useful in the study of spurious regression. The second application utilizes the series representation of BM to investigate the effect of the presence of deterministic trends in a regression on traditional unit-root tests. The third uses basis functions in the series approximation as instrumental variables to perform efficient estimation of the parameters in cointegrated systems. The fourth application proposes alternative estimators of long-run variances in some econometric models with dependent data, thereby providing autocorrelation-robust inference methods in these models. The authors review work related to these applications and ongoing research involving series approximation methods. In Chapter 13, Jiti Gao considers some identification, estimation, and specification problems in a class of semilinear time series models.
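The "down-sampling" and "oversampling" fixes for the imbalanced problem mentioned in connection with Chapter 11 amount to simple resampling schemes that balance the two classes before a classifier such as an SVM is trained. The class sizes, labels, and firm identifiers below are illustrative assumptions, not drawn from the CreditReform data.

```python
# Resampling schemes for imbalanced default data: balance the classes
# before training a classifier (e.g., an SVM).
import random

random.seed(1)

def down_sample(majority, minority):
    """Draw a majority-class subsample of minority size (without replacement)."""
    return random.sample(majority, len(minority)), list(minority)

def over_sample(majority, minority):
    """Resample the minority class with replacement up to majority size."""
    boosted = [random.choice(minority) for _ in range(len(majority))]
    return list(majority), boosted

# Illustrative portfolio: 950 non-defaults (label 0) vs. 50 defaults (label 1).
non_defaults = [("firm%d" % i, 0) for i in range(950)]
defaults = [("firm%d" % i, 1) for i in range(950, 1000)]

maj_d, min_d = down_sample(non_defaults, defaults)  # 50 vs. 50
maj_o, min_o = over_sample(non_defaults, defaults)  # 950 vs. 950
```

Down-sampling discards majority-class information but keeps training cheap; oversampling keeps all observations but duplicates minority cases, so the two are often combined or compared, as in the chapter's treatment.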
Existing studies for the stationary time series case are reviewed and discussed, and Gao also establishes some new results for the integrated time series case. The author also proposes a new estimation method and establishes a new theory for a class of semilinear nonstationary autoregressive models. Nonparametric and semiparametric estimation and hypothesis testing methods have been intensively studied for cross-sectionally independent data and weakly dependent time series data. However, many important macroeconomic and financial data series are found to exhibit stochastic and/or deterministic trends, and the trends can be nonlinear in nature. While a linear model may provide a decent approximation to a nonlinear model for weakly dependent data, the linearization can result in a severely biased approximation to a nonlinear model with nonstationary data. In Chapter 14, Yiguo Sun and Qi Li review some recent theoretical developments in nonparametric and semiparametric techniques applied to nonstationary or near-nonstationary variables. First, the chapter reviews some of the existing work on extending the I(0), I(1), and cointegrating-relation concepts defined for linear models to a nonlinear framework, and it points out some difficulties in extending the concepts of I(0), I(1), and cointegration to nonlinear models with persistent time series data. Second, the chapter reviews kernel estimation and hypothesis testing for nonparametric and semiparametric autoregressive and cointegrating models that explore unknown nonlinear relations among I(1) or near-I(1) processes, where asymptotic mixed-normal results for kernel estimators generally replace the usual asymptotic normality.
Business & Economics by Lucas Bernard, Willi Semmler
The first World Climate Conference, which was sponsored by the World Meteorological Organization in Genève in 1979, triggered an international dialogue on global warming. From the 1997 United Nations-sponsored conference, during which the Kyoto Protocol was signed, through meetings in Copenhagen, Cancún, Durban, and most recently Doha (2012) and Warsaw (2013), worldwide attention to the issue of global warming and its impact on the world's economy has rapidly increased in intensity. The consensus of these debates and discussions, however, is less than clear. Optimistically, many geoscience researchers and members of the Intergovernmental Panel on Climate Change (IPCC) have supported CO2 emission reduction pledges while maintaining that a 2°C limit in increased temperature by the year 2100 is achievable through international coordination. Other observers postulate that established CO2 reduction commitments, such as those agreed to at the Copenhagen United Nations Climate Change Conference (2009), are insufficient and cannot hold the global warming increase below 2°C. As experts theorize on precisely what impact global warming will have, developing nations have become particularly alarmed. The developed world will use energy to mitigate global warming effects, but developing countries are more exposed by geography and poverty to the most dangerous consequences of a global temperature rise and lack the economic means to adapt. The complex dynamics that result from this confluence of science and geopolitics give rise to even more complicated issues for economists, financial planners, business leaders, and policy-makers.
The Oxford Handbook of the Macroeconomics of Global Warming analyzes the economic impact of issues related to and resulting from global warming, specifically the implications of possible preventative measures, various policy changes, and adaptation efforts as well as the different consequences climate change will have on both developing and developed nations. This multi-disciplinary approach, which touches on issues of growth, employment, and development, elucidates for readers state-of-the-art research on the complex and far-reaching problem of global warming.
Science by Stefan Fenyö, István Fenyő, Hans-Wolfgang Stolle
Author: Ilya Nikolaevich Bronshtein, K. A. Semendyayev
Publisher: Van Nostrand Reinhold Company
This is a modern American translation of the world-renowned German reference that has sold millions of copies in nineteen prior editions. Topics covered range from the elementary to advanced mathematics. Students and researchers in mathematics, engineering, physics, and other sciences will welcome this comprehensive volume.
Mathematics by J. R. Ockendon, Sam Howison, Andrew Lacey, Alexander Movchan
Author: J. R. Ockendon, Sam Howison, Andrew Lacey, Alexander Movchan
Publisher: Oxford University Press on Demand
Partial differential equations are used in mathematical models of a huge range of real-world phenomena, from electromagnetism to financial markets. This revised edition of Applied Partial Differential Equations contains many new sections and exercises including transform methods, free surface flows, linear elasticity and complex characteristics.
This book strives to provide a concise yet comprehensive coverage of all major mathematical methods in engineering. Topics include advanced calculus, ordinary and partial differential equations, complex variables, vector and tensor analysis, calculus of variations, integral transforms, integral equations, numerical methods, and probability and statistics. Application topics consist of linear elasticity, harmonic motions, chaos, and reaction-diffusion systems. This book can serve as a textbook in engineering mathematics, mathematical modelling, and scientific computing. The book is organised into 19 chapters: Chapters 1-14 introduce various mathematical methods, Chapters 15-18 concern numerical methods, and Chapter 19 introduces probability and statistics.
Get Cutting-Edge Coverage of All Chemical Engineering Topics, from Fundamentals to the Latest Computer Applications. First published in 1934, Perry's Chemical Engineers' Handbook has equipped generations of engineers and chemists with an expert source of chemical engineering information and data. Now updated to reflect the latest technology and processes of the new millennium, the Eighth Edition of this classic guide provides unsurpassed coverage of every aspect of chemical engineering, from fundamental principles to chemical processes and equipment to new computer applications. Filled with over 700 detailed illustrations, the Eighth Edition of Perry's Chemical Engineers' Handbook features comprehensive tables and charts for unit conversion; a greatly expanded section on physical and chemical data; and, new to this edition, the latest advances in distillation, liquid-liquid extraction, reactor modeling, biological processes, biochemical and membrane separation processes, and chemical plant safety practices with accident case histories. Inside This Updated Chemical Engineering Guide: Conversion Factors and Mathematical Symbols • Physical and Chemical Data • Mathematics • Thermodynamics • Heat and Mass Transfer • Fluid and Particle Dynamics • Reaction Kinetics • Process Control • Process Economics • Transport and Storage of Fluids • Heat Transfer Equipment • Psychrometry, Evaporative Cooling, and Solids Drying • Distillation • Gas Absorption and Gas-Liquid System Design • Liquid-Liquid Extraction Operations and Equipment • Adsorption and Ion Exchange • Gas-Solid Operations and Equipment • Liquid-Solid Operations and Equipment • Solid-Solid Operations and Equipment • Size Reduction and Size Enlargement • Handling of Bulk Solids and Packaging of Solids and Liquids • Alternative Separation Processes • And Many Other Topics!