Down the Π-hole
Although a great deal is known about Π, not much of it is known by me, except that despite the irritating number of times I've been sagely informed that "Π ≡ 22 ÷ 7", it is quite literally irrational (a mental state towards which I will be compelled, if I hear that irritating non-fact just one more time). What that means is that it can't be represented precisely by any fraction composed from an integral numerator & denominator. Since decimal numbers are just fractions in which the denominator is constrained to a power of ten, they won't suffice either; so it's just as well that the Indiana Pi Bill didn't make it into law, otherwise people would be sagely informing me it equals 3.2, & I would end up scrawling its interminable digits, using a wax crayon gripped between my toes, on the walls of my cell.
Calculating Π to an arbitrary number of digits may seem like an odd pursuit, & I can't quite recall the reason why I started (perhaps "Police Academy 7" was on TV; which, ironically, also appears to be an endless series), but because of the huge variety of methods, it has turned into a rich seam to mine, requiring the calculation of square-roots to arbitrary precision, & the factorials of large integers. These formulae represent a small fraction of those available, but are representative of the various approaches to the problem.
Name                               | Order of Convergence | Rate of Convergence
-----------------------------------|----------------------|--------------------
Rabinowitz-Wagon                   | Linear               | ~ 10^{3/10}
Gosper                             | Linear               | ~ 10^{1.11}
Bellard                            | Linear               | 2^{10}
Bailey-Borwein-Plouffe-type        | Linear               | 2^{16}
Ramanujan                          | Linear               | ~ 10^{7.9}
Chudnovsky                         | Linear               | ~ 10^{14}
Borwein                            | Linear               | ~ 10^{50}
Brent-Salamin (AKA Gauss-Legendre) | Quadratic            |

 Series:

 Rabinowitz-Wagon & Gosper:

These are continued fractions, which have been implemented as spigot-algorithms. This conveniently involves only machine-width integer-arithmetic, whereas the remaining algorithms use unbounded rational arithmetic.
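The table-based, fixed-width spigot is too fiddly to reproduce briefly, but the flavour of digit-by-digit extraction survives in Gibbons's well-known unbounded streaming spigot, sketched below. Note the hedge: this variant uses unbounded Integers rather than the machine-width arithmetic described above, & the name piDigits is mine, not that of the implementation under discussion.

```haskell
-- Gibbons's unbounded spigot: streams the decimal digits of Pi.
-- The state (q, r, t, k, n, l) represents a linear-fractional
-- transformation which consumes terms of a series for Pi, & emits
-- a digit whenever it is certain that digit can no longer change.
piDigits :: [Integer]
piDigits = go (1, 0, 1, 1, 3, 3)
  where
    go (q, r, t, k, n, l)
      | 4 * q + r - t < n * t =          -- the next digit is now safe to emit
          n : go (10 * q, 10 * (r - n * t), t, k,
                  (10 * (3 * q + r)) `div` t - 10 * n, l)
      | otherwise =                      -- otherwise consume another series term
          go (q * k, (2 * q + r) * l, t * l, k + 1,
              (q * (7 * k + 2) + r * l) `div` (t * l), l + 2)
```

In GHCi, `take 10 piDigits` yields `[3,1,4,1,5,9,2,6,5,3]`; being a lazy list, one may draw as many digits as patience & heap allow.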
 Bellard & Bailey-Borwein-Plouffe-type (BBP):

Many instances of this type of series permit determination of specific digits of Π (when operating in a numeral-system whose base matches the convergence-rate), without determining the preceding digits; though that's not the manner in which it's implemented here.
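As a sketch of how such a series behaves when summed with exact rational arithmetic (the digit-extraction trick being omitted, as above), the classic base-16 BBP formula can be transcribed directly; the names bbpTerm & piApprox are illustrative, not drawn from the implementation under test.

```haskell
import Data.Ratio ((%))

-- One term of the classic base-16 Bailey-Borwein-Plouffe series:
-- Pi = sum over k >= 0 of 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
bbpTerm :: Integer -> Rational
bbpTerm k = (1 % 16 ^ k)
          * (4 % (8 * k + 1) - 2 % (8 * k + 4)
           - 1 % (8 * k + 5) - 1 % (8 * k + 6))

-- Exact rational partial sum of the first n terms; each term contributes
-- roughly 1.2 decimal digits (i.e. log10 16) of precision.
piApprox :: Integer -> Rational
piApprox n = sum (map bbpTerm [0 .. n - 1])
```

Ten terms already pin down a dozen decimal digits: `floor (piApprox 10 * 10 ^ 6)` is `3141592`. The denominators grow geometrically, which foreshadows the summation costs discussed under "Results".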
 Ramanujan & Chudnovsky:

These series are a little trickier, since they involve the calculation of the principal square-root of a constant integer. This doesn't seem onerous, until one realises that it will be required to the same arbitrary precision. They also involve factorials of potentially large integers.
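One workable approach to that square-root (an assumption on my part, not necessarily the method used here) is Newton's method on unbounded Integers: to obtain √c to d decimal digits, take the integer square-root of c·10^(2d).

```haskell
-- Integer square-root by Newton's method: the largest s with s * s <= n.
isqrt :: Integer -> Integer
isqrt 0 = 0
isqrt n = go n
  where
    go x =
      let x' = (x + n `div` x) `div` 2   -- Newton step for x^2 - n = 0
      in if x' >= x then x else go x'    -- the iterates decrease monotonically

-- The first d decimal digits (scaled to an Integer) of sqrt c.
sqrtDigits :: Integer -> Int -> Integer
sqrtDigits c d = isqrt (c * 10 ^ (2 * d))
```

For example, `sqrtDigits 2 10` is `14142135623`; the Chudnovsky formula would call for `sqrtDigits 10005 d` at the working precision d.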
 Borwein:

The series I've used (one of many attributed to the same source, discovered in 1993) is structurally very similar to the Chudnovsky brothers' formula, but with much faster convergence, yielding about fifty digits per term.
 Iterations:

 Brent-Salamin:

The series above all exhibit a linear order of convergence: for each term added, the approximation to Π improves by a constant number of digits. The rate of convergence (which governs the value of this constant) depends on the specific series. This iteration, in contrast, has a quadratic order of convergence, which means that the number of correct digits in the approximation doubles with each iteration.
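The shape of the iteration can be sketched as follows, using fixed-precision Doubles merely to demonstrate the convergence; a real implementation must of course carry arbitrary-precision arithmetic throughout, which is precisely where the resource-appetite described under "Results" originates.

```haskell
-- One Brent-Salamin (Gauss-Legendre) AGM iteration. Starting from
-- a = 1, b = 1 / sqrt 2, t = 1/4, p = 1, the estimate
-- (a + b)^2 / (4 * t) roughly doubles its correct digits per step.
brentSalamin :: Int -> Double
brentSalamin steps = go steps 1 (1 / sqrt 2) 0.25 1
  where
    go 0 a b t _ = (a + b) ^ 2 / (4 * t)
    go k a b t p =
      let a' = (a + b) / 2            -- arithmetic mean
          b' = sqrt (a * b)           -- geometric mean
          t' = t - p * (a - a') ^ 2   -- accumulated correction
      in go (k - 1) a' b' t' (2 * p)
```

Three iterations already exhaust a Double: `abs (brentSalamin 3 - pi)` is below 1e-9, & further steps are limited only by the 53-bit significand.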
Testing
My usual test-platform was used. (Many of the algorithms parallelise effectively, but lacking the hardware for a useful analysis, I've limited testing to just one core.)
Results
As noted when finding square-roots by expansion of their Taylor-series, most of these algorithms are sensitive to the speed at which one can sum rational numbers. Whilst summation is normally a relatively inexpensive operation, it comes as a rude awakening that it isn't when operating on rational numbers, since it typically involves folding cross-multiplication over the list. The previously employed performance-enhancing strategy can now be seen to be a ghastly compromise between time & space, when applied to the long sequences of rational numbers encountered here (particularly if they haven't been reduced to the representation with minimal numerator & denominator). This was countered by splitting the list of rational numbers into chunks, of a length chosen to balance the greed for speed with that for the heap.
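The chunking strategy can be sketched as below (the chunk length & the names chunkedSum & chunksOf are illustrative): each chunk is summed strictly into a single reduced Rational before the chunk-sums themselves are combined, so that no unevaluated chain of cross-multiplications accumulates on the heap.

```haskell
import Data.List (foldl')
import Data.Ratio ((%))

-- Strictly sum fixed-length chunks of rationals, then combine the
-- (already reduced) chunk-sums. The chunk length n trades heap
-- residency against the cost of adding ever-larger fractions.
chunkedSum :: Int -> [Rational] -> Rational
chunkedSum n = foldl' (+) 0 . map (foldl' (+) 0) . chunksOf n
  where
    chunksOf _ [] = []
    chunksOf m xs = let (h, t) = splitAt m xs in h : chunksOf m t
```

For instance, `chunkedSum 64 [1 % k | k <- [1 .. 1000]]` yields exactly the same Rational as a plain `sum`, but with bounded intermediate thunks.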
I've fitted polynomials to those data which exhibit a clear trend, so that one can estimate their complexity-class. Because of the log-log plot, these polynomials appear as straight lines.
Most algorithms appear to have approximately O(n^{2}) time-complexity. One might have predicted this for the spigot-algorithms, since they're implemented using a table, both of whose dimensions are proportional to the required precision, & each calculation is just fixed-width integer-arithmetic. Both the Borwein formula & the Brent-Salamin iteration look a shade more exponential than polynomial; though clarity is sadly lacking, & it may equally be that their gargantuan memory-requirement influences computation-time more than the number of steps required.
Because the quadratic order of convergence of the Brent-Salamin iteration results in the precision doubling with each iteration, a staggering of the results is apparent; both the execution-time & memory-requirement pulse above the underlying trend between those precision-requirements which exactly match the calculated precision, & at which, in consequence, no excess precision has been calculated. This unwelcome behaviour can also be observed, to a lesser extent, in other algorithms.
The two spigot-algorithms performed worst over the precision-range tested, though it looks likely that both the Borwein formula & the Brent-Salamin iteration will deliver worse performance at slightly higher precisions. The remaining algorithms performed broadly similarly, with the "base-65536" version of the Bailey-Borwein-Plouffe-type formulae marginally the fastest.
The memory-requirements of most algorithms exhibit approximately O(n) space-complexity, but the Borwein formula & Brent-Salamin iteration stood out again, being closer to O(n^{3}), which rapidly limited my ability to test further on the available hardware.
Conclusions
None of these implementations is competitive with record-breaking ones; nobody raises even one eyebrow until the first trillion digits have whistled by. Broadly speaking, one might expect an application coded in Haskell to run at only half the speed of a similar algorithm coded in C++; see the Code-shootout. I must also admit the possibility that my implementation stinks, but if so, then all my implementations are similarly blighted; it's regrettably easy, when using rational arithmetic, to accidentally use a fraction with a huge numerator & denominator where a much simpler fractional representation, of adequate precision, is also available, but I've found no smoking gun yet.
Despite the poor absolute performance of all the implementations, the relatively poor performance of the Brent-Salamin iteration was a surprise. Its quadratic order of convergence suggested superiority over the other algorithms, but regrettably each iteration demands vastly more resources than the one before.