“It has been said that art is a collaboration between God and the artist, which works best when the artist contributes as little as possible; so too with designing distributed systems.” –Unknown
Try to imagine the strategic direction of distributed computing frameworks. Just for a moment transition into a state where you think of nothing and yet listen to everything. What key technologies (disruptive or otherwise) have we experienced over recent years?
Notion of Autonomous Computing
Arguably, one of the best-known recent papers on autonomic computing is titled “Autonomic Computing: IBM’s Perspective on the State of Information Technology” [URI], authored by Paul Horn, Senior Vice President of IBM Research. As Dr. Horn’s manifesto outlines, we face many challenges within Information Technology over the next decade.
Within the autonomic landscape, each system node must satisfy the following criteria:
- self-identification, self-knowing
- self-recovery (from perturbations)
- self-protection (security)
- self-learning (including from errors)
- self-regulating (to open standards)
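The criteria above can be pictured as a sense-analyze-act control loop running on each node. The sketch below is purely illustrative (all class and method names are invented, not from any autonomic-computing toolkit) and maps a few of the self-* criteria onto methods of a single node:

```python
# Illustrative sketch: an autonomic node as a monitor/recover loop.
# Names and the 0.9 "perturbation" threshold are hypothetical.

class AutonomicNode:
    def __init__(self, name):
        self.name = name        # self-identification
        self.healthy = True     # self-knowing
        self.error_log = []     # raw material for self-learning

    def monitor(self, metric):
        """Detect a perturbation and record it for later learning."""
        if metric > 0.9:
            self.healthy = False
            self.error_log.append(metric)

    def recover(self):
        """Self-recovery: restore a known-good state after a perturbation."""
        if not self.healthy:
            self.healthy = True
            return "recovered"
        return "ok"

node = AutonomicNode("node-1")
node.monitor(0.95)       # perturbation detected, logged
status = node.recover()  # node heals itself without operator action
```

Real autonomic managers would close this loop continuously and adjust policy from the accumulated error log; the point here is only that each criterion becomes a responsibility of the node itself rather than of an administrator.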
Notion of Grid Computing
Wouldn’t it be nice to economically provide a highly available and fault tolerant system that can support an annual uptime guarantee of 99.999 percent—which equates to 5.256 minutes of annual unscheduled downtime—and that scales up by a factor of 10⁶? That is, an application’s storage and processing capacity can automatically grow by a factor of a million, doing jobs faster (10⁶x speed up) or doing 10⁶ larger jobs in the same time (10⁶x scale up), just by adding more resources. Stay focused.
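The downtime figure quoted above is straightforward to verify—“five nines” of availability leaves one hundred-thousandth of the year unavailable:

```python
# Verify the "five nines" downtime figure quoted in the text.

minutes_per_year = 365 * 24 * 60           # 525,600 minutes in a year
availability = 0.99999                     # 99.999 percent uptime
downtime = minutes_per_year * (1 - availability)
print(round(downtime, 3))                  # 5.256 minutes of downtime
```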
The concept of Grid (or Utility) computing was coined in the mid-1990s and is best defined by quoting The Anatomy of the Grid: “the real and specific problem that underlies the Grid concept is coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations.”
The Global Grid Forum (GGF) has made great headway; so has OGSA (Open Grid Services Architecture). Message Passing Interface (MPI) is widely accepted. Microsoft entered the HPC game with the Microsoft Compute Cluster Server, which I installed many months ago and which is now consuming most of my free time.
Notion of Adaptive Autonomous Agents
An agent is a system that tries to fulfill a set of goals in a complex, dynamic environment. An agent is situated in the environment: it can sense the environment through its sensors and act upon it using its actuators. An agent’s goals can take many different forms: they can be “end goals,” particular states the agent tries to achieve; they can be a selective reinforcement or reward that the agent attempts to maximize; or they can be internal needs or motivations that the agent has to keep within certain viability zones, and so on. An agent is called autonomous if it operates without outside intervention, i.e., if it decides itself how to relate its sensor data to motor commands in such a way that its goals are attained successfully. An agent is said to be adaptive if it is able to improve over time, i.e., if the agent becomes better at achieving its goals with experience.
The study of Adaptive Autonomous Agents is grounded in two important insights, which serve as “guiding principles” for most of the current research performed:
- Looking at complete systems changes the problems, often in a favorable way.
- Interaction dynamics can lead to emergent complexity.
Essentially, an agent is viewed as a set of competence modules (often called behaviors). Each module is responsible for a particular small task-oriented competence and is directly connected to its relevant sensors and actuators. Modules interface to one another via extremely simple messages rather than a common representation of beliefs, and so on. The communication between modules is almost never of a “broadcast” nature, but happens rather on a point-to-point (or one-to-one) basis. Typically, the messages consist of activation energy, or simple suppression and inhibition signals, or simple tokens in a restricted language. In addition to communicating via simple messages, modules also communicate “via the environment”: one module may change some aspect of the environment, which will trigger another module, and so on.
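The module scheme above can be sketched in a few lines. This is a toy illustration only—the module name, threshold value, and environment dictionary are all invented—but it shows both channels the text describes: point-to-point activation energy between modules, and communication “via the environment”:

```python
# Toy sketch of a competence module, loosely after behavior-based
# architectures. All names and values here are hypothetical.

class Module:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.activation = 0.0

    def receive(self, energy):
        """Point-to-point message: raw activation energy, no shared beliefs."""
        self.activation += energy

    def fire(self, environment):
        """When active enough, act on the environment; the change
        may in turn trigger other modules watching that environment."""
        if self.activation >= self.threshold:
            environment["light"] = "on"
            self.activation = 0.0
            return True
        return False

seek = Module("seek-light", threshold=1.0)
env = {"light": "off"}
seek.receive(0.6)
seek.fire(env)          # below threshold: no action taken
seek.receive(0.5)
fired = seek.fire(env)  # threshold crossed: the environment changes
```

Note that there is no central planner: coordination emerges from energy flows between modules and from side effects left in the shared environment.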
Notion of the Semantic Web
Simply put, the Semantic Web is the representation of data on the web in which information is given well-defined meaning—“to be a universal medium for the exchange of data”.
The principal technologies of the Semantic Web fit into a set of layered specifications called the Resource Description Framework (RDF). The current components of that framework are the RDF Core Model, the RDF Vocabulary Description Language and the Web Ontology Language, which all build on the foundation of URIs, XML, and XML namespaces.
The most interesting of these languages is the Web Ontology Language (OWL), which is a descriptive layer built on top of RDF used to model classes, properties, and objects.
Ontology is also a term borrowed from philosophy that refers to the science of describing the kinds of entities in the world and how they are related. Stated another way, an ontology defines the terms used to describe and represent an area of knowledge.
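At the bottom of the RDF stack, everything reduces to subject–predicate–object triples. The sketch below illustrates that core model using plain tuples—the `ex:` names are an invented vocabulary for illustration, not a real ontology, and a real RDF store would use full URIs and a query language such as SPARQL:

```python
# Minimal illustration of RDF's subject-predicate-object triple model.
# The "ex:" vocabulary is invented for this example.

triples = {
    ("ex:Horn", "ex:authored", "ex:AutonomicManifesto"),
    ("ex:AutonomicManifesto", "ex:topic", "ex:AutonomicComputing"),
    ("ex:OWL", "ex:buildsOn", "ex:RDF"),
}

def query(s=None, p=None, o=None):
    """Return every triple matching the pattern; None is a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# What did ex:Horn author?
results = query(s="ex:Horn", p="ex:authored")
```

Because the meaning lives in the shared vocabulary rather than in any one application, independently produced triples can be merged and queried together—which is the sense in which the data carries “well-defined meaning.”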
Notion of Service Oriented Architectures (SOA)
I think everyone has been beaten over the head with the “SOA stick,” which is why you won’t feel a thing; keep reading.
In computing, the term Service-Oriented Architecture (SOA) expresses a software architectural concept that defines the use of services to support the requirements of software users. In an SOA environment, nodes on a network make resources available to other participants in the network as independent services that the participants access in a standardized way. Most definitions of SOA identify the use of Web services (i.e., using SOAP or REST) in its implementation; however, one can implement SOA using any service-based technology.
The WS-* specifications have gained wide adoption, but other advances are needed.
One last point on service orientation: I caution everyone not to forget the elegance in the design of class libraries, as they are the underpinnings of every SOA. Design for the in-process consumer first, with an eye to areas of the API surface that might benefit from hosting within an SOA.
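That design order—a clean class library first, a service facade second—can be sketched as follows. The class names, the JSON message shape, and the omission of any real transport are all illustrative assumptions, not a prescribed pattern:

```python
# Sketch: design the in-process library first, then host it behind a
# thin service facade. Names and the JSON wire format are invented.

import json

class PriceCalculator:
    """The in-process API: fully usable with no service layer at all."""
    def total(self, quantity, unit_price):
        return quantity * unit_price

class PriceService:
    """SOA-style facade: standardized messages over whatever transport
    (SOAP, REST, queues) the deployment calls for."""
    def __init__(self):
        self._calc = PriceCalculator()

    def handle(self, request_json):
        req = json.loads(request_json)
        result = self._calc.total(req["quantity"], req["unit_price"])
        return json.dumps({"total": result})

svc = PriceService()
reply = svc.handle('{"quantity": 3, "unit_price": 2.5}')
```

The facade owns only message parsing and formatting; all of the actual logic stays in the class library, where in-process consumers can use it directly.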
There are many others such as advances in peer-to-peer algorithms, discrete-event simulation and the event horizon, social networking, recovery-oriented systems, storage and machine virtualization, bioinformatics, biotechnology, and quantum computing to name but a few. In my mind, the two most important questions to answer are which of these variables are of significant weighting in the equation of strategic direction and which are “noise”? I certainly have my opinion…but then again, you know what those are like.
Most readers and programmers have little patience to read discussions of this length. However, the length (at least to me) seems disproportionate to the importance of the topic. There is so much more to say and even more to ponder. But, before you run off to save the world let me leave you with one additional thought:
“The map is not the territory.” –Alfred Korzybski, Science and Sanity, 1933