Numerical Methods for Scientific and Engineering Computation, by M. K. Jain, S. R. K. Iyengar, and Rajendra K. Jain (New Age International).
Strings are created by enclosing the characters between single quotes. They are concatenated with the function strcat, and the colon operator can be used to extract a range of characters. Element-wise array operations are obtained by preceding the operator with a period. In an if construct, the block is skipped if the condition is false. The if conditional can be followed by any number of elseif constructs and an optional else clause. The function signum below illustrates the use of the conditionals.
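The signum function mentioned above is a natural first example of an if/elseif/else chain. The book's listing is in MATLAB; the sketch below reproduces the same logic in Python (the function name follows the text, everything else is illustrative).

```python
def signum(a):
    # Mirror of the if/elseif/else chain described in the text:
    # returns 1, -1, or 0 depending on the sign of a.
    if a > 0:
        return 1
    elif a < 0:
        return -1
    else:
        return 0

print(signum(-2.5), signum(0.0), signum(7))  # -1 0 1
```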
For instance, if the value of expression is equal to value2, the block of statements following case value2 is executed. If the value of expression does not match any of the case values, control passes to the optional otherwise block. In a while loop, the condition is evaluated again after execution of the block; if it is still true, the block is executed again. This process continues until the condition becomes false.
In the following example the function buildvec constructs a row vector of arbitrary length by prompting for its elements. The process is terminated when an empty element is encountered.
As an illustration, consider the following function that strips all the blanks from the string s1. A function can also be forced to exit early with the return command; the procedure is then terminated with the return statement. The for loop assures that the number of iterations does not exceed 30, which should be more than enough for convergence. The number of arguments may be zero. If there is only one output argument, the enclosing brackets may be omitted.
The number of input and output arguments used in the function call can be determined by the functions nargin and nargout, respectively. The error tolerance epsilon is an optional input that may be used to override the default value 1. The output argument numIter, which contains the number of iterations, may also be omitted from the function call. If myfunc is replaced with another function name, solve will not work unless the corresponding change is made in its code. In general, it is not a good idea to alter computer code that has been tested and debugged; all data should be communicated to a function through its arguments.
MATLAB makes this possible by passing the function handle of myfunc to solve as an argument, as illustrated below. Hence the variable func in solve contains the handle to myfunc. In-line functions: if the function is not overly complicated, it can also be represented as an inline object. If the input is an expression, it is evaluated and returned in value.
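The idea of communicating a function to a solver through a handle, rather than hard-coding its name, translates directly to Python, where functions are first-class objects and a lambda plays the role of an inline object. The sketch below is illustrative; tabulate and myfunc are hypothetical names, not the book's listings.

```python
def tabulate(func, xs):
    # func is passed in as an argument (the analogue of a MATLAB
    # function handle); tabulate never needs to know which function
    # it is evaluating, so its code never has to be edited.
    return [func(x) for x in xs]

def myfunc(x):
    return x**2 - 1.0

print(tabulate(myfunc, [0.0, 1.0, 2.0]))             # [-1.0, 0.0, 3.0]
# A lambda plays the role of MATLAB's inline object:
print(tabulate(lambda x: 2.0 * x, [0.0, 1.0, 2.0]))  # [0.0, 2.0, 4.0]
```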
The following two samples illustrate the use of input. A line break is forced by the newline character. The following example prints a formatted table of sin x vs. x. When the function is called with a single argument, e. Here are a few basic functions: If called with two input arguments: If x is a matrix, then a is a row vector containing the products over each column. The command window is always in the interactive mode, so that any statement entered into the window is immediately processed. The interactive mode is a good way to experiment with the language and try out programming ideas.
One can also create the P-code of a function and save it on disk by issuing the command pcode followed by the function name. MATLAB will then load the P-code, which has the .p extension. A listing of the saved variables can be displayed by the command who.
If greater detail about the variables is required, type whos. Variables can be cleared from the workspace with the command clear a b.
If the list of variables is omitted, all variables are cleared. Here we illustrate some basic commands for two-dimensional plots. The example below plots sin x and cos x on the same plot. This resulted in plots more suited for publication.
It is by far the longest and arguably the most important topic in the book. There is a good reason for this—it is almost impossible to carry out numerical analysis of any sort without encountering simultaneous equations.
Moreover, equation sets arising from physical problems are often very large, consuming a lot of computational resources. We cannot possibly discuss all the special algorithms in the limited space available. The rows and columns of a nonsingular matrix are linearly independent in the sense that no row or column is a linear combination of other rows or columns.
Ill-Conditioning: an obvious question is how to tell whether a given matrix is ill-conditioned. The answer lies in the condition number; note that the condition number is not unique, but depends on the choice of the matrix norm. Unfortunately, the condition number is expensive to compute for large matrices. An ill-conditioned matrix magnifies roundoff errors, which in turn introduces large errors into the solution, the magnitude of which depends on the severity of the ill-conditioning. The conditioning can be checked during or after the solution with only a small computational effort.
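As a concrete illustration of the condition number (the product of the norms of A and of its inverse), here is a small Python sketch using the infinity norm on 2x2 matrices. The helper names are hypothetical; a nearly singular matrix yields a huge condition number, while a well-conditioned one gives a number of order unity.

```python
def inf_norm(M):
    # Infinity norm: the largest absolute row sum.
    return max(sum(abs(v) for v in row) for row in M)

def cond2x2(A):
    # Condition number ||A|| * ||A^-1|| for a 2x2 matrix,
    # using the explicit inverse formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    Ainv = [[d / det, -b / det], [-c / det, a / det]]
    return inf_norm(A) * inf_norm(Ainv)

well = [[2.0, 1.0], [1.0, 3.0]]     # well-conditioned
ill = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular
print(cond2x2(well))   # modest (about 3)
print(cond2x2(ill))    # huge (about 4e4)
```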
Linear Systems: Linear, algebraic equations occur in almost all branches of numerical analysis. But their most visible application in engineering is in the analysis of linear systems (any system whose response is proportional to the input is deemed to be linear).
If the system is discrete, such as a truss or an electric circuit, then its analysis leads directly to linear algebraic equations. In the case of a statically determinate truss, for example, the equations arise when the equilibrium conditions of the joints are written down.
The unknowns are x1, x2, ..., xn. The behavior of continuous systems is described by differential equations, rather than algebraic equations. In other words, if the input is changed, the equations have to be solved again with a different b, but the same A. Therefore, it is desirable to have an equation-solving algorithm that can handle any number of constant vectors with minimal computational effort. Methods of Solution: there are two classes of methods for solving systems of linear, algebraic equations, direct and iterative methods. Direct methods transform the original equations into equivalent equations that are easier to solve; the transformation is carried out by applying the three operations listed below.
Overview of Direct Methods: see Table 2. A square matrix is called triangular if it contains only zero elements on one side of the leading diagonal. The solution would thus proceed one unknown at a time; this procedure is known as forward substitution. Gauss elimination consists of two parts, the elimination phase and the back substitution phase, as indicated in Table 2.
The equations are then solved by back substitution. The symbolic representation of this operation is Eq. We start the elimination by taking Eq.
Now we pick b as the pivot equation and eliminate x 2 from c: The original equations have been replaced by equivalent equations that can be easily solved by back substitution.
This is rather fortunate, since the determinant of a triangular matrix is very easy to compute: it is the product of the diagonal elements (you can verify this quite easily). Solving Eqs. Therefore, the current pivot equation is the kth equation, and all the equations below it are still to be transformed. The same applies to the components of the constant vector b.
The algorithm for the elimination phase now almost writes itself. Therefore, Aik is not replaced by zero, but retains its original value. During back substitution b is overwritten by the solution vector x, so that b contains the solution upon exit. Let there be m such constant vectors, denoted by b1, b2, ..., bm. The solutions are then obtained by back substitution in the usual manner, one vector at a time. It would be quite easy to make the corresponding changes in gauss.
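The elimination and back substitution phases described above can be sketched in Python as follows. The book's gauss is a MATLAB function; this is an illustrative translation without pivoting, so it assumes nonzero pivots.

```python
def gauss(A, b):
    # Naive Gauss elimination: forward elimination reduces A to upper
    # triangular form, then back substitution overwrites b with x.
    n = len(b)
    for k in range(n - 1):               # k indexes the pivot row
        for i in range(k + 1, n):
            if A[i][k] != 0.0:
                lam = A[i][k] / A[k][k]
                for j in range(k + 1, n):
                    A[i][j] -= lam * A[k][j]
                b[i] -= lam * b[k]
    for k in range(n - 1, -1, -1):       # back substitution
        b[k] = (b[k] - sum(A[k][j] * b[j] for j in range(k + 1, n))) / A[k][k]
    return b

A = [[4.0, -2.0, 1.0],
     [-2.0, 4.0, -2.0],
     [1.0, -2.0, 4.0]]
b = [11.0, -16.0, 17.0]
print(gauss(A, b))  # [1.0, -2.0, 3.0]
```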
However, the LU decomposition method, described in the next article, is more versatile in handling multiple constant vectors. Solution: We used the program shown below. After constructing A and b, the output format was changed to long so that the solution would be printed to 14 decimal places. Here are the results: LU decomposition is not unique (the combinations of L and U for a prescribed A are endless), unless certain constraints are placed on L or U.
These constraints distinguish one type of decomposition from another. Three commonly used decompositions are listed in Table 2. The cost of each additional solution is relatively small, since the forward and back substitution operations are much less time consuming than the decomposition process.
The diagonal elements of L do not have to be stored, since it is understood that each of them is unity. The contents of b are replaced by y during forward substitution. Similarly, back substitution overwrites y with the solution x.
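A Python sketch of Doolittle's decomposition and the accompanying forward/back substitution, illustrating how L and U share the storage of A and how one decomposition serves several constant vectors. Names follow the book's LUdec/LUsol, but the code itself is an illustrative translation.

```python
def LUdec(A):
    # Doolittle decomposition A = LU with unit diagonal in L.
    # L (below the diagonal) and U are stored in A itself;
    # the unit diagonal of L is not stored.
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            lam = A[i][k] / A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= lam * A[k][j]
            A[i][k] = lam   # store the multiplier in place of the zero
    return A

def LUsol(A, b):
    # Forward substitution Ly = b (y overwrites b), then
    # back substitution Ux = y (x overwrites y).
    n = len(b)
    for k in range(1, n):
        b[k] -= sum(A[k][j] * b[j] for j in range(k))
    for k in range(n - 1, -1, -1):
        b[k] = (b[k] - sum(A[k][j] * b[j] for j in range(k + 1, n))) / A[k][k]
    return b

A = LUdec([[4.0, -2.0, 1.0], [-2.0, 4.0, -2.0], [1.0, -2.0, 4.0]])
print(LUsol(A, [11.0, -16.0, 17.0]))  # [1.0, -2.0, 3.0]
print(LUsol(A, [3.0, 0.0, 3.0]))      # second constant vector, same decomposition
```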
We study it here because it is invaluable in certain other applications e.
By solving these equations in a certain order, it is possible to have only one unknown in each equation. Consider the lower triangular portion of each matrix in Eq. Taking the term containing Lij outside the summation in Eq. Therefore, once Lij has been computed, Aij is no longer needed.
This makes it possible to write the elements of L over the lower triangular portion of A as they are computed. The elements above the principal diagonal of A will remain untouched. If a negative quantity is encountered under the square root during decomposition (the matrix is then not positive definite), an error message is printed and the program is terminated.
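A Python sketch of the Choleski scheme just described: L is written over the lower triangle of A column by column, and a non-positive quantity under the square root signals that A is not positive definite. The function name follows the book; the code is an illustrative translation.

```python
from math import sqrt

def choleski(A):
    # Choleski decomposition A = L L^T, writing L over the lower
    # triangle of A as the elements are computed.
    n = len(A)
    for j in range(n):
        d = A[j][j] - sum(A[j][k]**2 for k in range(j))
        if d <= 0.0:
            raise ValueError('matrix is not positive definite')
        A[j][j] = sqrt(d)
        for i in range(j + 1, n):
            A[i][j] = (A[i][j] - sum(A[i][k] * A[j][k] for k in range(j))) / A[j][j]
    return A

L = choleski([[4.0, -2.0, 2.0],
              [-2.0, 2.0, -4.0],
              [2.0, -4.0, 11.0]])
print(L[0][0], L[1][0], L[1][1])  # 2.0 -1.0 1.0
```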
Substituting the given matrix for A in Eq. Then LUsol is used to compute the solution one vector at a time. By evaluating the determinant, classify the following matrices as singular, ill-conditioned or well-conditioned. If all the nonzero terms are clustered about the leading diagonal, then the matrix is said to be banded.
All the elements lying outside the band are zero. The matrix shown above has a bandwidth of three, since there are at most three nonzero elements in each row or column. Such a matrix is called tridiagonal. The original vectors c and d are destroyed and replaced by the vectors of the decomposed matrix. The vector y overwrites the constant vector b during the forward substitution.
Similarly, the solution vector x replaces y in the back substitution process. Thus Gauss elimination, which results in an upper triangular matrix of the form shown in Eq. There is an alternative storage scheme that can be employed during LU decomposition.
If elimination has progressed to the stage where the kth row has become the pivot row, we have the following situation: The original vectors d, e and f are destroyed and replaced by the vectors of the decomposed matrix.
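The tridiagonal decomposition and solution phases can be sketched in Python as follows, with the matrix stored as the three vectors c (subdiagonal), d (diagonal), and e (superdiagonal), which are overwritten as the text describes. Names follow the book's LUdec3/LUsol3; the code is an illustrative translation.

```python
def LUdec3(c, d, e):
    # LU decomposition of a tridiagonal matrix stored as three
    # vectors. The inputs are overwritten by the decomposed matrix.
    n = len(d)
    for k in range(1, n):
        lam = c[k - 1] / d[k - 1]
        d[k] -= lam * e[k - 1]
        c[k - 1] = lam
    return c, d, e

def LUsol3(c, d, e, b):
    # Forward substitution (y overwrites b), then back substitution
    # (x overwrites y).
    n = len(d)
    for k in range(1, n):
        b[k] -= c[k - 1] * b[k - 1]
    b[n - 1] /= d[n - 1]
    for k in range(n - 2, -1, -1):
        b[k] = (b[k] - e[k] * b[k + 1]) / d[k]
    return b

c = [-1.0, -1.0, -1.0]
d = [2.0, 2.0, 2.0, 2.0]
e = [-1.0, -1.0, -1.0]
LUdec3(c, d, e)
print(LUsol3(c, d, e, [1.0, 0.0, 0.0, 1.0]))  # [1.0, 1.0, 1.0, 1.0]
```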
As in LUsol3, the vector y overwrites the constant vector b during forward substitution and x replaces y during back substitution. However, Gauss elimination fails immediately due to the presence of a zero pivot element. The above example demonstrates that it is sometimes essential to reorder the equations during the elimination phase. The reordering, or row pivoting, is also required if the pivot element is not zero, but very small in comparison to other elements in the pivot row, as demonstrated by the following set of equations. This is the principle behind scaled row pivoting, discussed next.
The vector s can be obtained with the following algorithm: Note that the corresponding row interchange must also be carried out in the scale factor array s. Apart from row swapping, the elimination and solution phases are identical to those of function gauss in Art.
The most important of these is keeping a record of the row interchanges during the decomposition phase. In LUdecPiv this record is kept in the permutation array perm, initially set to [1, 2, ..., n]. Whenever two rows are interchanged, the corresponding interchange is also carried out in perm. Thus perm shows how the original rows were permuted.
This information is then passed to the function LUsolPiv, which rearranges the elements of the constant vector in the same order before carrying out forward and back substitutions. There are no infallible rules for determining when pivoting should be used. And we should not forget that pivoting is not the only means of controlling roundoff errors—there is also double precision arithmetic. It should be strongly emphasized that the above rules of thumb are only meant for equations that stem from real engineering problems.
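A Python sketch of Gauss elimination with scaled row pivoting: each row's scale factor is its largest element, and at every elimination step the row with the largest ratio |A[i][k]|/s[i] is promoted to pivot row, with the matching swap carried out in s. The name gaussPiv follows the text; the code is illustrative.

```python
def gaussPiv(A, b):
    # Gauss elimination with scaled row pivoting. s holds the scale
    # factor (largest element) of each row.
    n = len(b)
    s = [max(abs(v) for v in row) for row in A]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]) / s[i])
        if p != k:                       # swap rows in A, b and s
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            s[k], s[p] = s[p], s[k]
        for i in range(k + 1, n):
            lam = A[i][k] / A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= lam * A[k][j]
            b[i] -= lam * b[k]
    for k in range(n - 1, -1, -1):       # back substitution
        b[k] = (b[k] - sum(A[k][j] * b[j] for j in range(k + 1, n))) / A[k][k]
    return b

# A zero pivot in the (1,1) position defeats naive elimination,
# but scaled pivoting handles it:
A = [[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 3.0]]
b = [3.0, 3.0, 6.0]
print(gaussPiv(A, b))  # [1.0, 1.0, 1.0]
```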
Therefore, it is excluded from further consideration. As r32 is larger than r22, the third row is the better pivot row. It should be noted that U is the matrix that would result in the LU decomposition of the following row-wise permutation of A (the ordering of rows is the same as that achieved by pivoting). Alternate Solution: It is not necessary to physically exchange equations during pivoting.
The elimination would then proceed as follows (for the sake of brevity, we skip repeating the details of choosing the pivot equation). In hand computations this is not a problem, because we can determine the order by inspection.
The contents of p indicate the order in which the pivot rows were chosen. The equations are solved by back substitution in the reverse order: By dispensing with swapping of equations, the scheme outlined above would probably result in a faster and more complex algorithm than gaussPiv, but the number of equations would have to be quite large before the difference becomes noticeable.
The spring stiffnesses are denoted by ki, the weights of the masses are Wi, and xi are the displacements of the masses measured from the positions where the springs are undeformed. Write a program that solves these equations, given k and W. The differences are: For the statically determinate truss shown, the equilibrium equations of the joints are: Write a program that solves these equations for any given n (pivoting is recommended). The proof is simple: Inversion of large matrices should be avoided whenever possible due to its high cost.
As seen from Eq. If LU decomposition is employed in the solution, the solution phase forward and back substitution must be repeated n times, once for each bi.
However, the inverse of a triangular matrix remains triangular. Iterative, or indirect, methods start with an initial guess of the solution x and then repeatedly improve the solution until the change in x becomes negligible. Since the required number of iterations can be very large, the indirect methods are, in general, slower than their direct counterparts.
However, iterative methods do have the following advantages that make them attractive for certain problems: This makes it possible to deal with very large matrices that are sparse, but not necessarily banded.
Iterative procedures are self-correcting, meaning that roundoff errors or even arithmetic mistakes in one iterative cycle are corrected in subsequent cycles.
A serious drawback of iterative methods is that they do not always converge to the solution. The initial guess for x plays no role in determining whether convergence takes place—if the procedure converges for one starting vector, it would do so for any starting vector. The initial guess affects only the number of iterations that are required for convergence. If a good guess for the solution is not available, x can be chosen randomly. Equation 2. This completes one iteration cycle.
Convergence of the Gauss—Seidel method can be improved by a technique known as relaxation. The idea is to take the new value of xi as a weighted average of its previous value and the value predicted by Eq. This is called underrelaxation. The user must provide the function iterEqs that computes the improved x from the iterative formulas in Eq. The resulting procedure is known as the method of steepest descent.
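A Python sketch of Gauss-Seidel iteration with relaxation: each new component is the weighted average omega*x_new + (1 - omega)*x_old, so omega = 1 recovers plain Gauss-Seidel, omega > 1 is overrelaxation and omega < 1 underrelaxation. The code is an illustrative translation, not the book's listing.

```python
def gaussSeidel(A, b, omega=1.0, tol=1e-9, max_iter=500):
    # Gauss-Seidel iteration with relaxation. Each component is
    # updated in place, using the newest available values.
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        dx = 0.0
        for i in range(n):
            old = x[i]
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            x[i] = omega * new + (1.0 - omega) * old
            dx = max(dx, abs(x[i] - old))
        if dx < tol:
            return x
    raise RuntimeError('Gauss-Seidel did not converge')

A = [[4.0, -1.0, 1.0], [-1.0, 4.0, -2.0], [1.0, -2.0, 4.0]]
b = [12.0, -1.0, 5.0]
print(gaussSeidel(A, b))  # close to [3.0, 1.0, 1.0]
```

The example matrix is diagonally dominant, which guarantees convergence of the iteration.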
It is not a popular algorithm due to slow convergence.
Now suppose that we have carried out enough iterations to have computed the whole set of n residual vectors. It thus appears that the conjugate gradient algorithm is not an iterative method at all, since it reaches the exact solution after n computational cycles. In practice, however, convergence is usually achieved in less than n iterations. The conjugate gradient method is not competitive with direct methods in the solution of small sets of equations. Its strength lies in the handling of large, sparse systems where most elements of A are zero.
It is important to note that A enters the algorithm only through its multiplication by a vector; i. The maximum allowable number of iterations is set to n. This function must be supplied by the user see Example 2.
We must also supply the starting vector x and the constant right-hand-side vector b. Solution The conjugate gradient method should converge after three iterations. The small discrepancy is caused by roundoff errors in the computations. Solution In this case the iterative formulas in Eq. The solution vector x is initialized to zero in the program, which also sets up the constant vector b. Invert the following matrices: If Eq. The inversion procedure should contain only forward substitution.
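A Python sketch of the conjugate gradient loop, written so that the coefficient matrix enters only through a user-supplied function Av(v) returning the product A*v, the property emphasized above. Names are modeled on the book's conjGrad, but the code is illustrative.

```python
def conjGrad(Av, x, b, tol=1e-9):
    # Conjugate gradient solver. The matrix appears only inside the
    # user-supplied matrix-vector product Av(v), which is what makes
    # the method attractive for large sparse systems.
    n = len(b)
    r = [bi - yi for bi, yi in zip(b, Av(x))]
    s = r[:]                          # first search direction
    for _ in range(n):
        u = Av(s)
        alpha = sum(ri * ri for ri in r) / sum(si * ui for si, ui in zip(s, u))
        x = [xi + alpha * si for xi, si in zip(x, s)]
        r_new = [ri - alpha * ui for ri, ui in zip(r, u)]
        if sum(ri * ri for ri in r_new) ** 0.5 < tol:
            return x
        beta = sum(ri * ri for ri in r_new) / sum(ri * ri for ri in r)
        s = [ri + beta * si for ri, si in zip(r_new, s)]
        r = r_new
    return x

def Av(v):
    # A = [[4, -1], [-1, 4]] applied without ever storing the matrix.
    return [4.0 * v[0] - v[1], -v[0] + 4.0 * v[1]]

print(conjGrad(Av, [0.0, 0.0], [3.0, 3.0]))  # close to [1.0, 1.0]
```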
Solve the following equations with the Gauss-Seidel method. If the equations are overdetermined (A has more rows than columns), the least-squares solution is computed.
On return, U is an upper triangular matrix and L contains a row-wise permutation of the lower triangular matrix.
A banded matrix in sparse form can be created by the following command: The columns of B may be longer than the diagonals they represent. A diagonal in the upper part of A takes its elements from the lower part of a column of B, while a lower diagonal uses the upper part of B. The printout of a sparse matrix displays the values of these elements and their indices (row and column numbers) in parentheses.
Almost all matrix functions, including the ones listed above, also work on sparse matrices.
The source of the data may be experimental observations or numerical computations. In interpolation we construct a curve through the data points, making the implicit assumption that the data points are accurate and distinct. Curve fitting, in contrast, is applied to data that contains scatter; thus the curve does not have to hit the data points.
This property is illustrated in Fig. (example of quadratic cardinal functions). It is instructive to note that the farther a data point is from x, the more it contributes to the error at x. Each pass through the for-loop generates the entries in the next column, which overwrite the corresponding elements of a.
Therefore, a ends up containing the diagonal terms of Table 3.
This works well if the interpolation is carried out repeatedly at different values of x using the same polynomial. Each pass through the for-loop computes the terms in the next column of the table, which overwrite the previous elements of y.
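The two stages just described, computing the divided-difference coefficients in place and then evaluating the Newton polynomial by nested multiplication, can be sketched in Python as follows (names modeled on the book's coeffts/evalPoly; an illustrative translation).

```python
def coeffts(xData, yData):
    # Divided-difference coefficients of Newton's interpolating
    # polynomial. Each pass through the outer loop generates the
    # next column of the table, overwriting a, so a ends up holding
    # the diagonal terms.
    n = len(xData)
    a = list(yData)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            a[i] = (a[i] - a[i - 1]) / (xData[i] - xData[i - k])
    return a

def evalPoly(a, xData, x):
    # Nested (Horner-like) evaluation of the Newton polynomial.
    n = len(xData)
    p = a[n - 1]
    for k in range(n - 2, -1, -1):
        p = a[k] + (x - xData[k]) * p
    return p

xData = [0.0, 1.0, 2.0, 3.0]
yData = [1.0, 2.0, 9.0, 28.0]    # samples of y = x**3 + 1
a = coeffts(xData, yData)
print(evalPoly(a, xData, 1.5))   # 4.375 = 1.5**3 + 1
```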
At the end of the procedure, y contains the diagonal terms of the table. Three to six nearest-neighbor points produce good results in most cases. An interpolant intersecting more than six points must be viewed with suspicion. The reason is that the data points that are far from the point of interest do not contribute to the accuracy of the interpolant. In fact, they can be detrimental. The danger of using too many points is illustrated in Fig.
There are 11 equally spaced data points represented by the circles.
The solid line is the interpolant, a polynomial of degree ten, that intersects all the points. A much smoother result would be obtained by using a cubic interpolant spanning four nearest-neighbor points (the figure shows a polynomial interpolant displaying oscillations). As an example, consider Fig. There are six data points, shown as circles.
Extrapolation may not follow the trend of data. If extrapolation cannot be avoided, the following two measures can be useful: A linear or quadratic interpolant, for example, would yield a reasonable estimate of y 14 for the data in Fig.
Frequently this plot is almost a straight line. This is illustrated in Fig. (a logarithmic plot of the data). Determine the degree of this polynomial by constructing the divided difference table, similar to Table 3. Hence the polynomial is a cubic. Solution: This is an example of inverse interpolation, where the roles of x and y are interchanged.
Employing the format of Table 3. The mechanical model of a cubic spline is shown in Fig. 3 (an elastic strip pinned at the data points). It is a thin, elastic strip that is attached with pins to the data points. At the pins, the slope and bending moment (and hence the second derivative) are continuous. There is no bending moment at the two end pins; hence the second derivative of the spline is zero at the end points.
Since these end conditions occur naturally in the beam model, the resulting curve is known as the natural cubic spline. The pins, i.e., the data points, are called the knots of the spline. The last two terms in Eq. This task is carried out by the function splineCurv: it returns the segment number, that is, the value of the subscript i in Eq.
The second derivatives at the other knots are obtained from Eq. The corresponding interpolant is obtained from Eq. The interpolant can now be evaluated from Eq. The program must be able to evaluate the interpolant for more than one value of x.
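A Python sketch of the natural cubic spline: the knot curvatures (second derivatives) come from a small tridiagonal system with zero end values, and the interpolant is then evaluated segment by segment. This is an illustrative implementation, not the book's listing; the evaluation uses the standard form S = A*y[i] + B*y[i+1] + ((A^3 - A)*k[i] + (B^3 - B)*k[i+1]) * h^2 / 6.

```python
def curvatures(xData, yData):
    # Second derivatives k at the knots of a natural cubic spline:
    # k[0] = k[n-1] = 0, interior values from a symmetric tridiagonal
    # system solved by forward elimination and back substitution.
    n = len(xData)
    k = [0.0] * n
    if n < 3:
        return k
    h = [xData[i + 1] - xData[i] for i in range(n - 1)]
    d = [2.0 * (h[i - 1] + h[i]) for i in range(1, n - 1)]       # diagonal
    rhs = [6.0 * ((yData[i + 1] - yData[i]) / h[i]
                  - (yData[i] - yData[i - 1]) / h[i - 1]) for i in range(1, n - 1)]
    sub = [h[i] for i in range(1, n - 2)]   # sub- and superdiagonal coincide
    for i in range(1, len(d)):              # forward elimination
        lam = sub[i - 1] / d[i - 1]
        d[i] -= lam * sub[i - 1]
        rhs[i] -= lam * rhs[i - 1]
    for i in range(len(d) - 1, -1, -1):     # back substitution
        rhs[i] = (rhs[i] - (sub[i] * rhs[i + 1] if i < len(d) - 1 else 0.0)) / d[i]
    for i in range(1, n - 1):
        k[i] = rhs[i - 1]
    return k

def evalSpline(xData, yData, k, x):
    # Locate the segment containing x, then evaluate its cubic.
    i = 0
    while i < len(xData) - 2 and x > xData[i + 1]:
        i += 1
    h = xData[i + 1] - xData[i]
    A = (xData[i + 1] - x) / h
    B = (x - xData[i]) / h
    return (A * yData[i] + B * yData[i + 1]
            + ((A**3 - A) * k[i] + (B**3 - B) * k[i + 1]) * h * h / 6.0)

xData = [0.0, 1.0, 2.0]
yData = [0.0, 1.0, 0.0]
k = curvatures(xData, yData)
print(k)                                 # [0.0, -3.0, 0.0]
print(evalSpline(xData, yData, k, 0.5))  # 0.6875
```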
Find the zero of y(x) from the following data: The function y(x) represented by the data in Prob. Given the data x 0 0. Use the method that you consider to be most convenient. Compute the zero of the function y(x) from the following data: Solve Example 3. Black, Z. and Kreith, F. Determine the relative density of air at The form of f(x) is determined beforehand, usually from the theory associated with the experiment from which the data is obtained.
This brings us to the question: The function S to be minimized is thus the sum of the squares of the residuals. Equations 3. In that case, both the numerator and the denominator in Eq. Substitution into Eq. The normal equations become progressively ill-conditioned with increasing m.
Polynomials of high order are not recommended, because they tend to reproduce the noise inherent in the data. The polynomial evaluation in stdDev is carried out by the subfunction polyEval which is described in Art.
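For the simplest case, a straight line y = a + b*x, the normal equations can be solved in closed form; the sketch below also reports the residual-based standard deviation of the fit. Names and data are illustrative, not from the book.

```python
def fit_line(x, y):
    # Least-squares straight line y = a + b*x from the normal
    # equations, plus the standard deviation of the fit.
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar)**2 for xi in x))
    a = ybar - b * xbar
    # residual sum of squares, with n - 2 degrees of freedom
    S = sum((yi - a - b * xi)**2 for xi, yi in zip(x, y))
    sigma = (S / (n - 2)) ** 0.5 if n > 2 else 0.0
    return a, b, sigma

x = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 2.9, 5.1, 7.0]
a, b, sigma = fit_line(x, y)
print(round(a, 3), round(b, 3))  # intercept ~1.04, slope ~1.99
```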
For example, the instrument taking the measurements may be more sensitive in a certain range of data. Sometimes the data represent the results of several experiments, each carried out under different circumstances.
We note from Eq. Compute the standard deviation in each case.
Following the steps in Example 3. From Eqs. As expected, this result is somewhat different from that obtained in Part 1. The computations of the residuals and standard deviation are as follows: Three tensile tests were carried out on an aluminum bar. In each test the strain was measured at the same values of stress (in MPa). Solve Prob. The results were: This problem was solved by interpolation in Prob.
This problem was solved in Prob. The table shows the variation of the relative thermal conductivity k of sodium with temperature T. Singer, C. Knowing that radioactivity decays exponentially with time: If x is an array, y is computed for all elements of x. If x is a matrix, s is computed for each column of x. If x is a matrix, xbar is computed for each column of x. Before proceeding further, it might be helpful to review the concept of a function.
In numerical computing the rule is invariably a computer algorithm. The roots of equations may be real or complex. Complex zeroes of polynomials are treated near the end of this chapter. There is no universal recipe for estimating the value of a root. If the equation is associated with a physical problem, then the context of the problem physical insight might suggest the approximate location of the root.
Otherwise, the function must be plotted, or a systematic numerical search for the roots can be carried out. One such search method is described in the next article.
Prior bracketing is, in fact, mandatory in the methods described in this chapter. Another useful tool for detecting and bracketing roots is the incremental search method. The basic idea behind the incremental search method is simple: If the interval is small enough, it is likely to contain a single root.
There are several potential problems with the incremental search method: for example, the function may touch the x-axis without crossing it. Such locations are not true zeroes, since the function does not cross the x-axis (see the plot of tan x). The search starts at a and proceeds in steps dx toward b. Once a zero is detected, rootsearch returns its bounds (x1, x2) to the calling program. This can be repeated as long as rootsearch detects a root.
This procedure yields the following results: This technique is also known as the interval halving method. Bisection is not the fastest method available for com- puting roots, but it is the most reliable. Once a root has been bracketed, bisection will always close in on it. The method of bisection uses the same principle as incremental search: Otherwise, the root lies in x1 , x3 , in which case x2 is replaced by x3.
In either case, the new interval (x1, x2) is half the size of the original interval. The number of bisections n required to reduce the interval to tol is computed from Eq. Solution: The best way to implement the method is to use the table shown below. Note that the interval to be bisected is determined by the sign of f(x), not its magnitude.
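The incremental search and interval-halving procedures described above can be sketched together in Python (names follow the book's rootsearch/bisect; the code and the test function are illustrative).

```python
from math import cos

def rootsearch(f, a, b, dx):
    # Incremental search: march from a toward b in steps dx and
    # return the first interval (x1, x2) over which f changes sign.
    x1, f1 = a, f(a)
    x2 = x1 + dx
    while x2 <= b:
        f2 = f(x2)
        if f1 * f2 < 0.0:
            return x1, x2
        x1, f1, x2 = x2, f2, x2 + dx
    return None

def bisect(f, x1, x2, tol=1e-9):
    # Interval halving: keep the half in which f changes sign.
    f1 = f(x1)
    while abs(x2 - x1) > tol:
        x3 = 0.5 * (x1 + x2)
        f3 = f(x3)
        if f1 * f3 <= 0.0:
            x2 = x3
        else:
            x1, f1 = x3, f3
    return 0.5 * (x1 + x2)

f = lambda x: x - cos(x)          # single root near x = 0.739
bracket = rootsearch(f, 0.0, 1.0, 0.1)
print(bracket)                    # bracket near (0.7, 0.8)
root = bisect(f, *bracket)
print(root)                       # ~0.7390851332
```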
Utilize the functions rootsearch and bisect. Thus the input argument fex4_3 in rootsearch is a handle for the function fex4_3 listed below. In most problems the method is much faster than bisection alone, but it can become sluggish if the function is not smooth. These points allow us to carry out the next iteration of the root by inverse quadratic interpolation (viewing x as a quadratic function of f).
If the result x of the interpolation falls inside the latest bracket (as is the case in Figs.), it is accepted; otherwise, another round of bisection is applied. The points are relabeled after each iteration, and we have now recovered the original sequencing of points in Figs. First interpolation cycle: substituting the above values of x and f into the numerator of the quotient in Eq. Second interpolation cycle: applying the interpolation in Eq. Solution 2. The sensible approach is to avoid the potentially troublesome regions of the function by bracketing the root as tightly as possible from a visual inspection of the plot.
The Newton-Raphson formula can be derived from the Taylor series expansion of f(x) about x (a figure gives the graphical interpretation of the formula). The formula approximates f(x) by the straight line that is tangent to the curve at xi. The algorithm for the Newton-Raphson method is simple: only the latest value of x has to be stored.
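A Python sketch of the Newton-Raphson loop: at each step the tangent line at the current x is followed to the x-axis, and only the latest x is stored. The derivative is supplied by the caller; names and the test function are illustrative.

```python
def newtonRaphson(f, df, x, tol=1e-9, max_iter=30):
    # Newton-Raphson iteration: x_new = x - f(x)/f'(x).
    # Only the latest value of x is kept.
    for _ in range(max_iter):
        dx = -f(x) / df(x)
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError('too many iterations')

f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
print(newtonRaphson(f, df, 1.0))  # ~1.2599210 (cube root of 2)
```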
Here is the algorithm: Assume that the inflows Q01, Q03 and outflows Q44, Q55 are the same. Use conservation of flow to recompute the values for the other flows.
As indicated, the rate of transfer of chemicals through each pipe is equal to a flow rate Q, with units of cubic meters per second multiplied by the concentration of the reactor from which the flow originates c, with units of milligrams per cubic meter. If the system is at a steady state, the transfer into each reactor will balance the transfer out.
Develop mass-balance equations for the reactors and solve the three simultaneous linear algebraic equations for their concentrations. In such systems, a stream containing a weight fraction Yin of a chemical enters from the left at a mass flow rate of F1. Simultaneously, a solvent carrying a weight fraction Xin of the same chemical enters from the right at a flow rate of F2.
The second cycle takes us to P6, which is the optimal point. What is the actual error?
Subsequent step sizes, determined from Eq. Thus the program listed below does much more work than necessary for the problem at hand.