The term distributed computing has become a broad catch-all, applied to everything from parallel computation to the auto-configuration of ad-hoc networks. In order to explore the technical problems in this field, it is necessary to define clearly what the field is, and to that end I want to elaborate on what I believe is and is not distributed computing.
The first use of distributed computing described the problem faced by projects like SETI@home and the genome project. These organizations needed to process huge quantities of data and did not have the budget to do so internally, so they turned instead to the general public. They did not diverge from client-server architecture, but allowed any computer running their client software to request and process chunks of data. The problem this created was not one of communication but of trust, since none of the clients could be expected to be either reliable or honest. It is typically solved by checking for suspicious responses and by sending each chunk of data to multiple clients to ensure they agree on an answer.
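The redundancy check described above can be sketched in a few lines. This is a minimal illustration, not any project's actual validator; the function name and the majority threshold are my own assumptions.

```python
from collections import Counter

def verify_chunk(results):
    """Accept a result only if a strict majority of clients agree on it.

    `results` maps a client id to the answer it reported for one data
    chunk. Returns the agreed answer, or None if no majority exists
    (the chunk would then be reissued to fresh clients).
    """
    if not results:
        return None
    answer, votes = Counter(results.values()).most_common(1)[0]
    return answer if votes > len(results) / 2 else None

# Three clients processed the same chunk; one is faulty or malicious.
reports = {"client_a": 0.731, "client_b": 0.731, "client_c": 9999.0}
print(verify_chunk(reports))  # -> 0.731
```

The point of the scheme is that a lone unreliable or dishonest client is simply outvoted, at the cost of doing each unit of work several times over.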
Five years ago the term grid computing was the next big thing, though it has since been supplanted by the term cloud computing. Both models are related to distributed computing in that they focus on the scalability and reliability of a large number of machines. Both terms imply that the machines are entirely under one's control, with grid computing referring specifically to clusters internal to an organization and cloud computing to the outsourcing of that resource. The problem faced is not as different from the previous one as we might initially think. Given the large number of machines and components, failures will occur regularly, and it is important that data and computation be redundant enough to hide these individual faults. The problem is lessened only in that nodes are typically not purposefully malicious in this scenario. Cloud computing, and grid computing before it, are also systems that have been built and have essentially overcome the issues they faced. Many large tech companies run enormous data centers, and services such as Google's App Engine and Amazon's EC2 let customers outsource their computation to the cloud.
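How redundancy hides individual failures can be shown with a toy quorum write, the pattern many replicated stores use. This is a sketch under my own assumptions (dicts standing in for storage nodes, a caller-supplied set of failed nodes), not any particular system's API.

```python
def replicated_write(replicas, down, key, value, quorum):
    """Write `key=value` to every reachable replica; succeed only if
    at least `quorum` nodes acknowledge. `down` is the set of replica
    indices that are currently failed."""
    acks = 0
    for i, node in enumerate(replicas):
        if i in down:
            continue            # failed node: no acknowledgement
        node[key] = value
        acks += 1
    return acks >= quorum

nodes = [{}, {}, {}]
# One of three replicas is down; a quorum of 2 still succeeds,
# hiding the failure from the caller entirely.
print(replicated_write(nodes, down={1}, key="x", value=42, quorum=2))  # -> True
```

Because the caller only needs a quorum of acknowledgements, any single machine can fail without the write, or the data, being lost.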
Mesh networks are another seemingly disjoint field that has traditionally been linked to distributed computing. Here a number of computers need to work together to access a common resource, such as the Internet, but do not necessarily trust one another. The typical picture is that only one computer has an Internet connection, and that resource must be shared with computers not directly connected to it. The problem is now primarily one of trust: not only are reliability issues likely, but each node benefits from ignoring requests from others, since that leaves more bandwidth for itself. There have been several implementations of mesh networking, though none that can boast tremendous success. The best known is certainly the OLPC, which used mesh networking to cope with the intermittent connectivity found in developing countries. Beyond such bespoke solutions, the 802.11 working group has developed the 802.11s amendment, which standardizes how wireless devices form multi-hop mesh networks.
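The core mechanism of sharing one node's Internet connection is multi-hop routing: each machine forwards traffic toward the gateway along radio links. A minimal sketch, using breadth-first search over an adjacency map (real mesh protocols such as those in 802.11s are far more involved; the node names here are hypothetical):

```python
from collections import deque

def route_to_gateway(links, start, gateway):
    """Find a shortest multi-hop path from `start` to the node that
    has the Internet connection, over an adjacency map of radio links.
    Returns the list of hops, or None if the gateway is unreachable."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == gateway:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A, B, C, D are laptops; only D has Internet access.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(route_to_gateway(links, "A", "D"))  # -> ['A', 'B', 'C', 'D']
```

The trust problem is visible even in this toy: nodes B and C must be willing to relay A's traffic, and nothing in the routing itself compels them to.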
To me, distributed computing is both all and none of these problems. I see the fundamental problem as one of cooperation. Users should be viewed as selfish: they want to get the most from a service while giving as little as possible. As evidence, compare the number of successful free web services with those that cost money. Free, ad-supported services continue to rule as a business model because most users are unwilling to pay more than they must for a service. The challenge, then, is to get a set of strangers to cooperate so that they all end up with more than they started with.
In addition to the fields mentioned above, this problem is faced by file-sharing networks. Here as much as anywhere we see the need for cooperation among selfish individuals. Each user's goal is to get data from others as quickly as possible while sending as little as possible, for both legal and purely selfish reasons. One attempt to solve this problem has been to develop communities around the technology, so that past performance is tracked and good behavior encouraged, but even this is not foolproof.
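A protocol-level answer to the same incentive problem is the tit-for-tat strategy that BitTorrent popularized: each peer preferentially serves the peers that have recently uploaded the most to it, so withholding bandwidth costs a freeloader its own download speed. A minimal sketch (the function name, slot count, and data shape are my own assumptions):

```python
def choose_unchoked(uploaded_to_us, slots):
    """Tit-for-tat peer selection: keep serving the `slots` peers who
    have recently sent us the most data, measured in bytes.

    `uploaded_to_us` maps a peer id to the bytes it has uploaded to us
    over the recent measurement window.
    """
    ranked = sorted(uploaded_to_us, key=uploaded_to_us.get, reverse=True)
    return set(ranked[:slots])

history = {"p1": 120, "p2": 5, "p3": 300, "p4": 0}
print(sorted(choose_unchoked(history, slots=2)))  # -> ['p1', 'p3']
```

Unlike a community reputation system, this needs no shared history between strangers; cooperation is rewarded immediately, within a single session.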
For me, the goal of distributed computing is to provide a structure in which users can burst beyond the resources they control without worrying about peers who are malicious or self-serving. I plan to delve into this problem by examining and experimenting with existing systems, looking specifically at fault tolerance, which I see as fundamental to any solution, and then by building a structure of my own.