This article lists the 10 greatest algorithms of the twentieth century.

1. 1946: The Monte Carlo method

[1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.]

In 1946, three scientists at the Los Alamos Scientific Laboratory in the United States, John von Neumann, Stan Ulam, and Nick Metropolis, jointly invented the Monte Carlo method. The idea can be illustrated as follows: draw a square with side length 1, and inside it draw an irregular shape in chalk. How do we compute the area of the irregular figure? The Monte Carlo method tells us to scatter N soybeans (N a large natural number) uniformly over the square and count how many land inside the irregular shape, say M. The area of the strange shape is then approximately M/N, and the larger N is, the more accurate the estimate. (We assume the beans lie flat on the plane and do not overlap; scattering soybeans is only a metaphor for uniform random sampling.)

The Monte Carlo method can be used to approximate pi: have the computer generate two random numbers between 0 and 1 at a time and check whether the resulting point lies inside the unit circle. Generate a large number of such random points; the ratio of points inside the circle to the total number of points approaches the ratio of the quarter-circle's area to the square's area, which is pi/4. The more random points, the closer the result gets to pi, although convergence is slow: even with 10^9 random points, the result typically matches pi only in the first 4 digits. A minimal sketch of this experiment appears after section 3 below.

2. 1947: The simplex method

[1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.]

In 1947, George Dantzig of the RAND Corporation invented the simplex method, which has since become a cornerstone of the discipline of linear programming. Linear programming, simply put, gives a set of linear constraints (all variables appear to the first power, for example a1*x1 + b1*x2 + c1*x3 > 0) and asks for the extreme value of a given linear objective function. This may sound abstract, but it comes in handy constantly in practice: for a company, the human and material resources it can put into production are limited (the "linear constraints"), and the company's goal is to maximize profit (the "objective function takes its maximum"). Seen this way, linear programming is not abstract at all! As part of operations research, linear programming has become an important tool in management science, and Dantzig's simplex method is an extremely effective way of solving linear programming problems. A small worked example also follows section 3.

3. 1950: Krylov subspace iteration methods

[1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.]

In 1950, Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos of the Institute for Numerical Analysis at the US National Bureau of Standards initiated the development of Krylov subspace iteration methods. These methods solve equations of the form Ax = b, where A is an n*n matrix; when n is sufficiently large, direct solution becomes very difficult. The Krylov approach cleverly turns the problem into the iteration K*x(i+1) = K*x(i) + b - A*x(i), where K (named after the Russian mathematician Aleksei Krylov) is a constructed matrix close to A that is easy to invert. The beauty of the algorithm is that it breaks a complex problem into easily computed sub-steps, as the third sketch below shows.
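To make section 1's pi experiment concrete, here is a minimal Python sketch. The function name estimate_pi and the sample count are illustrative choices, not part of the original article:

```python
import random

def estimate_pi(n: int) -> float:
    """Estimate pi by sampling n random points in the unit square.

    A point (x, y) with x, y in [0, 1) falls inside the quarter of the
    unit circle exactly when x*x + y*y <= 1, so the hit ratio
    approaches (quarter-circle area) / (square area) = pi / 4.
    """
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(estimate_pi(1_000_000))  # roughly 3.14; accuracy grows slowly with n
```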
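For section 2, here is a toy linear program solved with SciPy's linprog (this assumes SciPy is installed; its default "highs" backend includes a dual-simplex solver descended from Dantzig's method). The resource and profit numbers are invented purely for illustration:

```python
from scipy.optimize import linprog

# Maximize profit 3*x + 5*y subject to resource limits:
#   x <= 4          (supply of material 1)
#   2*y <= 12       (supply of material 2)
#   3*x + 2*y <= 18 (machine hours)
#   x, y >= 0
# linprog minimizes, so we negate the objective coefficients.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal production plan (2, 6) and profit 36
```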
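Finally, a sketch of the iteration from section 3, K*x(i+1) = K*x(i) + b - A*x(i), taking K to be the diagonal of A (the Jacobi splitting), which is trivial to invert. This is only the simplest splitting iteration; practical Krylov methods such as conjugate gradients are considerably more refined:

```python
import numpy as np

def jacobi_style_iteration(A, b, steps=100):
    """Solve A x = b via K x(i+1) = K x(i) + (b - A x(i)), K = diag(A).

    Each step needs only a matrix-vector product and a division by the
    diagonal, so it stays cheap even when A is huge and sparse.
    """
    K_inv = 1.0 / np.diag(A)          # inverting a diagonal K is trivial
    x = np.zeros_like(b, dtype=float)
    for _ in range(steps):
        x = x + K_inv * (b - A @ x)   # residual-correction step
    return x

# Diagonally dominant test matrix, so the iteration converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi_style_iteration(A, b))
print(np.linalg.solve(A, b))          # agrees with the direct solution
```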
4. 1951: The decompositional approach to matrix computations

[1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.]

In 1951, Alston Householder of Oak Ridge National Laboratory formalized the decompositional approach to matrix computations. The idea is that any matrix can be factored into matrices of special forms, such as triangular, diagonal, or orthogonal matrices. The significance of this approach is that it made flexible matrix computation software packages possible. A short demonstration appears after section 7.

5. 1957: The Fortran optimizing compiler

[1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler.]

In 1957, a team at IBM led by John Backus created the Fortran optimizing compiler. The name Fortran is a contraction of "Formula Translation." It was the world's first high-level programming language to be officially adopted, and it has been passed down to the present day, since evolving into the well-known Fortran 2008 standard.

6. 1959–61: The QR algorithm for computing eigenvalues

[1959–61: J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.]

Between 1959 and 1961, J.G.F. Francis of Ferranti Ltd. in London found a stable method for computing eigenvalues, known as the QR algorithm. This is another algorithm rooted in linear algebra: if you have studied the subject, you will remember that computing the eigenvalues of a matrix is one of its core problems. The traditional approach involves finding the roots of a high-degree polynomial, which becomes very difficult when the problem is large. The QR algorithm instead factors the matrix into the product of an orthogonal matrix Q and an upper triangular matrix R. Like the Krylov methods mentioned earlier, it is iterative: it replaces the difficult root-finding problem with a sequence of easily computed sub-steps, making it feasible to compute the eigenvalues of large matrices on a computer. A bare-bones version is sketched after section 7.

7. 1962: Quicksort

[1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.]

In 1962, Tony Hoare of Elliott Brothers, Ltd. in London presented quicksort. Congratulations: this may be the first algorithm on the list you are already familiar with! Quicksort is a classic among sorting algorithms, and its applications show up everywhere. Its basic idea is to partition the sequence into two halves, with everything in the left half "small" and everything in the right half "large," and to recurse on the halves until the entire sequence is in order. Its average time complexity is O(N*log(N)), a historic improvement over naive O(N^2) methods such as bubble sort. For Sir Tony Hoare, quicksort was just a small incidental discovery; his contributions to computing also include formal methods and work on the ALGOL 60 programming language, achievements for which he won the 1980 Turing Award. A sketch follows below.
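Picking up section 4: a tiny demonstration of the decompositional approach using NumPy's built-in factorizations (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# QR decomposition: A = Q @ R with Q orthogonal, R upper triangular.
Q, R = np.linalg.qr(A)
print(np.allclose(A, Q @ R))          # True

# Cholesky decomposition: A = L @ L.T, with L lower triangular,
# for a symmetric positive definite A.
L = np.linalg.cholesky(A)
print(np.allclose(A, L @ L.T))        # True
```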
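For section 6, here is a bare-bones, unshifted QR iteration. Production implementations first reduce the matrix to Hessenberg form and use shifts; a symmetric matrix is chosen here so that even this plain version converges:

```python
import numpy as np

def qr_eigenvalues(A, steps=200):
    """Unshifted QR iteration: factor A = Q R, then form A' = R Q.

    Each step is a similarity transform (R Q = Q^T A Q), so eigenvalues
    are preserved; for well-behaved matrices the iterates approach
    triangular form with the eigenvalues on the diagonal.
    """
    A = A.copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.sort(np.diag(A))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
print(qr_eigenvalues(A))                # matches the library routine:
print(np.sort(np.linalg.eigvalsh(A)))
```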
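And section 7's quicksort, in its shortest readable Python form (a serious in-place implementation would avoid the list copies made here):

```python
def quicksort(seq):
    """Sort a list: pick a pivot, split into 'small' and 'large', recurse."""
    if len(seq) <= 1:
        return seq
    pivot, rest = seq[0], seq[1:]
    left = [x for x in rest if x <= pivot]    # the "small" half
    right = [x for x in rest if x > pivot]    # the "large" half
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```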
8. 1965: The fast Fourier transform

[1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.]

In 1965, James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveiled the fast Fourier transform (FFT). The FFT is a fast algorithm for the discrete Fourier transform, the cornerstone of digital signal processing, and its time complexity is only O(N*log(N)). Even more important than its time efficiency is that the FFT is very easy to implement in hardware, which gives it an extremely wide range of applications in electronics. A small numerical check is sketched after section 10.

9. 1977: The integer relation detection algorithm

[1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.]

In 1977, Helaman Ferguson and Rodney Forcade of Brigham Young University advanced an integer relation detection algorithm. Integer relation detection is an ancient problem whose history can be traced back to the era of Euclid. Specifically: given a group of real numbers x1, x2, ..., xn, do there exist integers a1, a2, ..., an, not all zero, such that a1*x1 + a2*x2 + ... + an*xn = 0? Ferguson and Forcade solved this problem, and such algorithms have been applied, among other things, to "simplify the calculation of Feynman diagrams in quantum field theory." A sketch using a modern descendant of their algorithm also follows section 10.

10. 1987: The fast multipole algorithm

[1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.]

In 1987, Leslie Greengard and Vladimir Rokhlin of Yale University invented the fast multipole algorithm. It is used for "accurate calculations of the motions of N particles interacting via gravitational or electrostatic forces (for example, the stars in the Milky Way, or the atoms in a protein)."
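To illustrate section 8, here is a check of NumPy's FFT against the naive O(N^2) evaluation of the discrete Fourier transform definition:

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the DFT definition."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2*pi*i*j*k / n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

x = np.random.rand(256)
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True: same result,
# but np.fft.fft needs only O(N log N) work instead of O(N^2)
```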
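For section 9, the mpmath library ships PSLQ, a modern integer relation algorithm in the line of work Ferguson helped start (this assumes mpmath is installed). The golden ratio phi satisfies phi^2 = phi + 1, and PSLQ recovers exactly that relation:

```python
from mpmath import mp, pslq

mp.dps = 30                      # work with 30 significant digits
phi = (1 + mp.sqrt(5)) / 2       # golden ratio

# Look for integers (a1, a2, a3), not all zero, with
#   a1*1 + a2*phi + a3*phi**2 = 0.
relation = pslq([mp.mpf(1), phi, phi**2])
print(relation)                  # e.g. [1, 1, -1], i.e. 1 + phi - phi^2 = 0
```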
References: Barry A. Cipra, "The Best of the 20th Century: Editors Name Top 10 Algorithms," SIAM News, Vol. 33, No. 4, 2000. Address: http://TopTen/topten.pdf.