# Distributed Pi

So I've been interested in distributed computing for some time, since 2007, basically around the time I started doing web development. I've always sort of romanticized the notion of distributed computing because of its vast theoretical potential. Projects like [BOINC](http://boinc.berkeley.edu/), [SETI@Home](http://setiathome.berkeley.edu/) and [Folding@Home](http://folding.stanford.edu/English/HomePage) have always given me some kind of idealized notion of "computing for good", inspiring some kind of useful social, scientific or otherwise beneficial change to humanity. But those projects never got the kind of adoption which could truly change the world; together they form networks which are specialized but can, in the end, eke out only relatively minute performance. Part of the problem lies in their intrinsically forbidding voluntary nature. It takes too much effort to install, and it amplifies the problem intrinsic to any "democratic" system: why bother? Voter apathy often stems from a feeling that an individual contribution regresses toward nothing, which is statistically true, but never helpful when participation is what sustains one of these collective entities.

In a broader sense, I've always suspected that part of what is necessary for technological progress is the loss of control. That certainly appears true from human experience: the vast arrays of neurons bound by a cranium are uninviting and wholly unwilling to expose their raw inner computational power to the emergent consciousness within them. No doubt the human brain is good at computing, just not (as it appears) at math; but intuition (especially of the physical kind) can only arise when a system internalizes some very complex math. We just can't access it, because evolution or some other process has determined that such access had to be done away with on the climb to higher cognitive powers. Likewise, I think computers are convergent in that sort of way. There's a great ladder of abstraction which is growing taller and taller, a Tower of Babel of sorts, and at the end, perhaps we'll find a similar goal, maybe not of god per se, but an equally sought target of consciousness or some other high-level intellectual faculties contemporaneously denied to computers.

Part of this is losing control, which is inevitable and politically dangerous. Computing terminals are powerful and capable of much more than they're being used for. In fact, most computers spend most of their time idling; processors get ever faster not to handle the idle time but to smooth out the few bursts that actually require fast computation. It's impractical for operating systems to build some kind of distributed computing platform into themselves, however ideal that would be. It's too contentious to ever get adopted, conceptually a short step away from giving a mega-corp the ability to pilfer your precious information as well.

But the browser, specifically the web, provides us with an interesting opportunity. Here, we have significantly less fear for personal privacy, since the web comes with an expectation of sorts for information sharing. The existence and prevalence of client-side scripting grants implicit permission to exploit system resources during the site's tenure. With these granted permissions, we now have the freedom to create something truly remarkable.
Because it's not so much voluntary on the part of the user as it is voluntary on the part of the site maintainer, who now has the responsibility of allocating and managing the system resources of his visitors for their brief but additive virtual encounters. The users lose control, and that's a good thing.

## Old Stuff

When I first wrote that introduction, I mentioned that I had been interested in distributed computing for [quite a while](http://antimatter15.com/wp/2007/10/ajax-distributed-computing/), and that's true. 2007 was, at the time of writing, five years ago. That's a bit less than a third of my life, which is fairly significant. It doesn't even quite belong to recent memory, and perhaps has escaped living memory into something quasi-zomboid. Part of the reason why this section is relatively short compared to the others is that I really don't have a very good memory of what that was like, and it's rather sad that my previous attempts never had long writeups regarding their potential and process (though there do tend to be more little updates about the process).

Anyway, since it's been such a long time, I've been digging through old stuff to pick out exactly what I did back then. The first things I can find are somewhat easier and simpler problems: finding palindromes, or at least words which spell different words when reversed (I believe this was based on a program I had written a few years prior in Visual Basic). Another was for cracking hashes in a distributed manner. Back then, I think those were among the first explorations into the concept of distributed computing in the browser. It was the days before WebWorkers, and Cross-Origin Resource Sharing was in its infancy. The pool of possible computation was still large, but most of it was forbidden.

Take note, however, that distributed and parallel computing have existed for far longer, perhaps even longer than the idea of a computer. Computers get faster each year, thanks to [Moore's law](http://en.wikipedia.org/wiki/Moore%27s_law), and more and more people get connected to the Internet each year. Perhaps in addition to [Metcalfe's law](http://en.wikipedia.org/wiki/Metcalfe%27s_law) with regard to the value of a telecommunications network, there is the ability to harness idle computation power in order to increase the value the network provides its participants linearly as it grows. There are almost a billion people connected to the Internet, and however insignificant a contribution each of those terminals brings to the network, it would add up into something truly remarkable.

However, hash cracking and palindrome finding are fairly trivial. They have little real-world practicality and don't seem in any way representative of the greater problems facing humanity. While it's a little impractical to aim for some kind of project which has immediate applications to the value of humanity, it's certainly a worthy target worth approximating. They're isolated examples which are easy to parallelize and hardly count as real ventures into the field. Calculating digits of pi is marginally less useless, and it represents something significantly harder, tasks which may be closer to the kinds of computations performed in the real world. There have been other projects with similar goals, most notably [PiHex](http://en.wikipedia.org/wiki/PiHex), which acts as a sort of vindication of this as a possible legitimate attempt.
At this point, I have no idea if the algorithm works with digits on the order of trillions, and that's one of the big reasons I'm not actually trying to succeed PiHex.

## Modern Incarnation (circa 2012)

Okay, so I decided to dig this up again. Why? Because I really want to play around with server-side JS a little more (just in case getting a Node VPS has anything to do with merit). The sort of funny thing is that's exactly what it used to be, back in the ancient past, before NodeJS existed (it was before Google Chrome was released, and V8 wasn't open source). I used to have an application which would schedule jobs for computation on the client and provide them in a more useful format. However, I never bothered saving the files outside of the [AppJet](http://en.wikipedia.org/wiki/AppJet) web IDE, and the code was lost when the service was discontinued. I tried porting it to Google App Engine, but that version was plagued with strange bugs which ended up printing out the wrong digits after a few thousand were computed.

Right now, revisiting the idea, I've added a few things which reflect how web browsers have changed since my first attempt. The old version tried to mock threading through the liberal application of `setTimeout`, which meant that most interface interaction wouldn't be terribly affected, but it did incur a noticeable slowdown. Now it uses WebWorkers, which opens up numerous interesting possibilities. First, and perhaps most important, are context isolation and lightweight, asynchronous embedding across multiple domains (origins). Since WebWorkers can work (see what I did there?) across multiple origins and still have access to `XMLHttpRequest`, and with the advent of Cross-Origin Resource Sharing (CORS), it's easy to embed this in a page. The low embedding overhead and the true multi-threading abilities go a long way toward making this more than an intriguing concept and into something much more practical. The code, which is now on the [Github repository](https://github.com/antimatter15/distributed-pi) in a new folder named `node`, has a very basic interface. Rather than manipulating the widths of divs for progress indicators, it now uses `<progress>` elements.
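To make the embedding story concrete, here's a minimal sketch of what a worker-based client along these lines might look like. The script name, job-server endpoints, and message shapes are all hypothetical placeholders rather than the repository's actual API; the worker script itself has to be served from the embedding page's origin, but its `XMLHttpRequest` calls can cross origins so long as the job server sends CORS headers.

```js
// pi-worker.js — hypothetical worker-side sketch; the endpoints and
// message format are illustrative, not the repository's real API.
function getJSON(url, callback) {
  var xhr = new XMLHttpRequest();            // available inside WebWorkers
  xhr.open('GET', url);
  xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
  xhr.send();                                // cross-origin works if the
}                                            // server sends CORS headers

function loop() {
  getJSON('http://pi.example.com/job', function (job) {
    var digits = '';
    for (var i = 0; i < job.count; i++) {
      digits += piHexDigit(job.start + i);   // see the BBP sketch below
      postMessage({ progress: (i + 1) / job.count });
    }
    var xhr = new XMLHttpRequest();          // report the finished chunk,
    xhr.open('POST', 'http://pi.example.com/result');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = loop;                       // then ask for the next job
    xhr.send(JSON.stringify({ start: job.start, digits: digits }));
  });
}
loop();
```

The embedding page then only needs to spawn the worker and wire its progress messages to the `<progress>` element mentioned above:

```js
// On the embedding page: a <progress> element's value defaults to a
// 0..1 scale, so the worker's fractional progress maps onto it directly.
var worker = new Worker('pi-worker.js');
worker.onmessage = function (e) {
  document.querySelector('progress').value = e.data.progress;
};
```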
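And for reference, the digit-extraction algorithm alluded to earlier is presumably in the Bailey–Borwein–Plouffe (BBP) family, the same idea PiHex built on: it yields the hexadecimal digit of π at a given position without computing any of the digits before it. Here's a minimal JavaScript sketch of the classic BBP extraction (my own reconstruction, not the repository's code), which could stand in for the `piHexDigit` used above:

```js
// Fractional part of 16^n * sum_{k>=0} 1/(16^k * (8k+j)), the building
// block of the BBP formula. Modular exponentiation keeps every term's
// integer part out of the sum, so only fractional parts are carried.
function series(j, n) {
  var sum = 0;
  for (var k = 0; k <= n; k++) {
    sum += powMod(16, n - k, 8 * k + j) / (8 * k + j);
    sum -= Math.floor(sum);                    // keep the fractional part
  }
  for (var t = n + 1; t <= n + 8; t++) {       // a few rapidly shrinking
    sum += Math.pow(16, n - t) / (8 * t + j);  // tail terms (k > n)
  }
  return sum - Math.floor(sum);
}

// (base^exp) mod m by binary exponentiation. Plain doubles stay exact
// only while m*m < 2^53, i.e. for positions up to a few million hex
// digits; beyond that you'd want BigInt or similar.
function powMod(base, exp, m) {
  if (m === 1) return 0;
  var result = 1;
  base = base % m;
  while (exp > 0) {
    if (exp % 2 === 1) result = (result * base) % m;
    base = (base * base) % m;
    exp = Math.floor(exp / 2);
  }
  return result;
}

// Hex digit of pi at position n (0-indexed, counting after the point).
function piHexDigit(n) {
  var t = 4 * series(1, n) - 2 * series(4, n) - series(5, n) - series(6, n);
  return Math.floor(16 * (t - Math.floor(t))).toString(16);
}
```

As a sanity check, `piHexDigit(0)` returns `'2'`, matching π = 3.243F6A88… in hexadecimal.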