It's often good to exploit special properties of the function under consideration.
In this case, if your processor has a fast sqrt, then remove the singularity at x=1 by approximating the following function instead.
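The transformed function itself is not reproduced here, but one standard form of this trick, assuming the target is arcsin, is to approximate g(x) = (pi/2 - arcsin x)/sqrt(1-x), which is smooth at x=1 (its limit there is sqrt(2)), and then recover arcsin with a single sqrt. A minimal sketch, where g is my assumption rather than the page's actual figure:

```python
import math

# Sketch of the sqrt trick (assumed form): arcsin has infinite slope at
# x = 1, but g(x) = (pi/2 - arcsin(x)) / sqrt(1 - x) is smooth there,
# with limiting value sqrt(2), so g is far easier to approximate.

def g(x):
    if x == 1.0:
        return math.sqrt(2.0)          # limiting value at the endpoint
    return (math.pi / 2 - math.asin(x)) / math.sqrt(1.0 - x)

def asin_via_g(x):
    # reconstruct arcsin from the smooth factor and one sqrt
    return math.pi / 2 - math.sqrt(1.0 - x) * g(x)
```

Note how gently g varies: from pi/2 at x=0 down to sqrt(2) at x=1, with no singular behavior for a polynomial to fight.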
Using a Taylor series is almost always wrong. (It's one of those ideas taught in freshman classes because they (that is, the ideas, not the freshmen) are simple, not because they're appropriate when solving applications. Another example is the linked list.) Nevertheless, here is the Taylor series, expanded about the origin, and the error plot.
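The error plot is elided, but the series itself is standard: arcsin x = sum over n >= 0 of C(2n,n)/(4^n (2n+1)) x^(2n+1). A short sketch that also shows why it is a poor idea near x=1, where the terms decay painfully slowly:

```python
import math

# Taylor series of arcsin about 0 (shown for comparison, not recommended):
# arcsin(x) = sum_{n>=0} C(2n, n) / (4**n * (2n + 1)) * x**(2n + 1)

def asin_taylor(x, terms=20):
    total = 0.0
    c = 1.0                              # C(2n, n) / 4**n, equal to 1 at n = 0
    for n in range(terms):
        total += c * x ** (2 * n + 1) / (2 * n + 1)
        c *= (2 * n + 1) / (2 * n + 2)   # ratio of successive central binomials
    return total
```

With 20 terms the sum is essentially exact at x=0.5, yet still off by a few percent at x=0.999, because the series converges like a power of x^2 and x^2 is nearly 1 there.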
It's almost always better to expand a Taylor series about the center of the interval, x=0.5 in this case.
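The coefficients about x=0.5 are not reproduced here, but they can be generated from the derivative recurrence (1 - x^2) y^(n+2) = (2n+1) x y^(n+1) + n^2 y^(n), obtained by repeatedly differentiating (1 - x^2) y'' = x y'. A sketch for an arbitrary center (the center 0.5 and the term count below are illustrative choices):

```python
import math

# Taylor coefficients of arcsin about a center a, via the recurrence
# (1 - x^2) y^(n+2) = (2n + 1) x y^(n+1) + n^2 y^(n),
# which follows from differentiating (1 - x^2) y'' = x y' n times.

def asin_taylor_coeffs(a, nterms):
    d = [math.asin(a), 1.0 / math.sqrt(1.0 - a * a)]   # y(a) and y'(a)
    for n in range(nterms - 2):
        d.append(((2 * n + 1) * a * d[n + 1] + n * n * d[n]) / (1.0 - a * a))
    return [dk / math.factorial(k) for k, dk in enumerate(d)]

def eval_poly(coeffs, a, x):
    # evaluate sum_k coeffs[k] * (x - a)**k
    t, total, p = x - a, 0.0, 1.0
    for c in coeffs:
        total += c * p
        p *= t
    return total
```

About x=0.5 the radius of convergence is only 0.5 (the distance to the branch point at x=1), but within [0,1] the series converges much faster than the one centered at the origin.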
The Taylor series is designed to fit the first n derivatives at one point. However, we're interested in approximating a function over some interval. In this case, a Chebyshev approximation is far better than a Taylor approximation, since it is a polynomial approximation that comes closer to minimizing the maximum error over a given interval. (However, contrary to a common belief, it doesn't exactly do this.) Again, use symmetry and make the interval [0,1]. Here is the 6th degree Chebyshev approximation, in both a Chebyshev basis and a power basis.
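The coefficients shown on the page are elided here, but interpolation at the Chebyshev nodes gives essentially the same near-minimax behavior. A sketch, using as target the assumed singularity-free function g(x) = (pi/2 - arcsin x)/sqrt(1-x) on [0,1] (an assumption standing in for whatever function the page actually approximates):

```python
import math

def g(x):
    # assumed stand-in target, with the x = 1 singularity already removed
    if x >= 1.0:
        return math.sqrt(2.0)
    return (math.pi / 2 - math.asin(x)) / math.sqrt(1.0 - x)

def cheb_fit(f, deg, a, b):
    # interpolate f at the deg+1 Chebyshev nodes of [a, b]; returns the
    # coefficients of T_0..T_deg (close to, but not exactly, the minimax fit)
    n = deg + 1
    thetas = [math.pi * (2 * k + 1) / (2 * n) for k in range(n)]
    vals = [f((a + b) / 2 + (b - a) / 2 * math.cos(t)) for t in thetas]
    coeffs = [2.0 / n * sum(v * math.cos(j * t) for v, t in zip(vals, thetas))
              for j in range(n)]
    coeffs[0] /= 2.0
    return coeffs

def cheb_eval(coeffs, a, b, x):
    # Clenshaw recurrence in the Chebyshev basis
    t = (2.0 * x - a - b) / (b - a)     # map [a, b] to [-1, 1]
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

coeffs = cheb_fit(g, 6, 0.0, 1.0)
```

Converting the Chebyshev basis to the power basis is a linear change of basis; the Chebyshev form is usually preferred for evaluation because its coefficients decay and Clenshaw's recurrence is stable.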
The Pade approximation is a formal transformation of the Taylor series into a rational expression. Altho it adds no information, and calculating it does not refer back to the arcsin, it is often a better approximation than the Taylor series it was derived from.
Here is the Pade approximation, with 7 d.f., derived from the Taylor series centered about the origin. When counting the number of degrees of freedom, note that one leading coefficient can be normalized to be one (altho Maple doesn't automatically do this).
The Pade approximation centered about x=0.5 is this.
Altho it is accurate over most of the interval, it is useless because of the pole at x=.9626070222, inside the interval. This happens sometimes with rational approximations.
The Chebyshev-Pade approximation is the analogous formal transformation of the Chebyshev approximation into a rational function.
The minimax polynomial approximation is this.
For a given degree, the minimax polynomial is, by definition, the best polynomial approximation in the maximum-error sense. Perhaps the reason that it is not used more is that it is more difficult to compute.
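The minimax coefficients themselves are elided above, but the core of the Remez algorithm that computes them is a linear solve forcing the error to alternate +E, -E, +E, ... at deg+2 reference points; the difficult part is iterating the exchange of reference points until the error truly equioscillates. A sketch of one such step, on an assumed singularity-free target g(x) = (pi/2 - arcsin x)/sqrt(1-x) (my assumption, since the page's own function is elided) and a hypothetical degree 3:

```python
import math

def g(x):
    # assumed stand-in target; its limit at x = 1 is sqrt(2)
    if x >= 1.0:
        return math.sqrt(2.0)
    return (math.pi / 2 - math.asin(x)) / math.sqrt(1.0 - x)

def solve(A, rhs):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [A[i][:] + [rhs[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def remez_step(f, deg, refs):
    # solve for a_0..a_deg and E so that the error is exactly
    # +E, -E, +E, ... at the deg+2 reference points
    A = [[x ** j for j in range(deg + 1)] + [(-1.0) ** i]
         for i, x in enumerate(refs)]
    sol = solve(A, [f(x) for x in refs])
    return sol[:deg + 1], sol[deg + 1]

# starting references: the Chebyshev extreme points of [0, 1]
deg = 3
refs = [0.5 - 0.5 * math.cos(math.pi * i / (deg + 1)) for i in range(deg + 2)]
coeffs, E = remez_step(g, deg, refs)
```

A full Remez implementation would now locate the extrema of the actual error curve, exchange them into the reference set, and repeat until |E| matches the observed maximum error; skipping that iteration is exactly why this single step is only near-minimax.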
Finally, rational minimax approximations would be expected to be even better than polynomials, especially when the function has non-polynomial-like features, such as singularities. However, in most cases Maple and Mathematica fail to compute them; Maple's error message is "Error, (in numapprox/remez) error curve fails to oscillate sufficiently; try different degrees". These cases failed here: [3,3], [4,2], [2,4], [1,5]. Here is the [5,1] quotient. Note that the error curve does not oscillate equally. This is probably caused by insufficient precision during the computation. It could be fixed by increasing Digits, but is left here as a warning of what can happen, and of why it's useful to plot the error curves.
Here is the [4,2] quotient.
Finally, here is the [2,4] quotient.
In another test, I set Digits:=30 and then tried to compute every rational minimax approximation with total degree up to 19. The following cases succeeded; all others failed.
[0,1], [1,0], [0,2], [1,1], [2,0], [0,3], [1,2], [3,0], [1,3], [4,0], [1,4], [5,0], [1,5], [6,0], [n,0] for n=7 to 9. The failures of the high-degree polynomial minimax approximations may be worth further study.