Core Concepts
Agents
…are built around DIDs (Decentralized Identifiers). Users can bring their existing identity or have AD4M create a new one.
Conceptually, AD4M agents are modelled as something that can speak and that can listen.
Agents speak by creating Expressions of AD4M Languages,
whereby these Expressions get signed by the agent's DID key.
AD4M agents also have a
publicly shared Perspective that other agents can see just
by resolving their DID URI.
This Perspective is like the agent's semantic web page,
consisting of statements the agent chooses to share with the world:
statements about themselves (acting as a public profile used by various apps),
or about anything else.
Finally, AD4M agents declare a
direct message Language,
an AD4M Language through which they choose to receive messages.
AD4M's built-in Agent-Language resolves DID URIs to AD4M Expressions that look like this:
```js
{
  did: "did:key:zQ3shNWd4bg67ktTVg9EMnnrsRjhkH6cRNCjRRxfTaTqBniAf",
  perspective: {
    links: []
  },
  directMessageLanguage: "lang://QmZ9Z9Z5yZsegxArToww5zmwtPpojXN6zXJsi7WwMUa8"
}
```
(see API docs about Agent)
Languages
…encapsulate the actual technology used to communicate (like Holochain or IPFS)
and enable Agents to create and share Expressions.
Expressions are referenced via a URI of the form:
```
<language>://<language specific expression address>
```
(with special cases like DID URIs being parsed as such and resolved through the Agent Language).
AD4M resolves these URIs by first looking up the Language via its hash
(potentially downloading the Language through the built-in Language of Languages)
and then asking the Language for the Expression with the given address.
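The resolution flow above can be sketched as follows. This is a simplified stand-in, not AD4M's actual internals: the `languages` registry, the `resolveExpressionURI` helper, and the minimal `Language` shape are all hypothetical names for illustration.

```typescript
// Minimal sketch of expression-URI resolution (hypothetical types,
// not the real AD4M API)
type Address = string;

interface Language {
  // Returns the Expression stored at the given
  // language-specific address, or null if not found
  get(address: Address): Promise<object | null>;
}

// Stand-in for the Language registry. In AD4M, looking up a Language
// may involve downloading it through the Language of Languages.
const languages = new Map<string, Language>();

async function resolveExpressionURI(uri: string): Promise<object | null> {
  // Split "<language>://<address>" into its two parts
  const separator = uri.indexOf("://");
  if (separator < 0) throw new Error(`Not an expression URI: ${uri}`);
  const languageHash = uri.slice(0, separator);
  const address = uri.slice(separator + 3);

  // 1. Look up the Language by its hash
  const language = languages.get(languageHash);
  if (!language) throw new Error(`Unknown Language: ${languageHash}`);

  // 2. Ask the Language for the Expression at that address
  return language.get(address);
}
```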
Languages are distributed and interpreted as JavaScript modules.
AD4M passes proxy objects for the managed Holochain, IPFS, etc. instances, so Language
developers can use these technologies without having to set them up or manage them themselves.
```ts
// Example of a Language that uses the Holochain proxy object
export default async function create(context: LanguageContext): Promise<Language> {
  const Holochain = context.Holochain as HolochainLanguageDelegate;
  await Holochain.registerDNAs([{ file: DNA, nick: DNA_NICK }]);
  // ...
  const expressionAdapter = {
    async get(expressionAddress: Address): Promise<Expression | null> {
      const expression = await Holochain.call(
        DNA_NICK,
        "zome_name",
        "get_expression_zome_function_name",
        expressionAddress
      );
      return expression;
    },
    // ...
  };
  // ...
  return { expressionAdapter /* , ... */ } as Language;
}
```
(Read the docs section on how to write AD4M Languages)
Perspectives
…are local and private graph databases. They represent context and association between expressions.
They consist of a list of
RDF/semantic-web-like triplets (subject-predicate-object) called links,
because all three items are just Expression URIs pointing to Expressions of arbitrary Languages.
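As an illustration, a link might look like the following. The `source`/`predicate`/`target` field names match how AD4M names the triplet parts; the predicate and target URIs here are made up for the example.

```typescript
// A link: three Expression URIs forming a subject-predicate-object triplet
interface Link {
  source: string;    // subject
  predicate: string; // predicate
  target: string;    // object
}

// Hypothetical example: "this agent knows that agent".
// The predicate and target URIs are invented for illustration.
const link: Link = {
  source: "did:key:zQ3shNWd4bg67ktTVg9EMnnrsRjhkH6cRNCjRRxfTaTqBniAf",
  predicate: "foaf://knows",
  target: "did:key:zExampleOtherAgentDid",
};
```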
Perspectives are like Solid’s pods, but they are agent-centric:
- Perspectives belong to, and are stored with, a single Agent.
- Links inside Perspectives are Link Expressions, so they include their provenance and cryptographic signature.
While Expressions are objective (every agent resolving their URI renders the same data),
Perspectives represent subjective associations between objective Expressions.
(See the Getting Started section above for how to work with Perspectives)
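Conceptually, a Perspective is just a private, queryable list of links. A toy in-memory sketch of that idea (not the real AD4M API, which goes through the AD4M client) might look like this:

```typescript
// Toy model of a Perspective as a local, private list of links
// (illustration only, not AD4M's actual Perspective class)
interface Link {
  source: string;
  predicate: string;
  target: string;
}

class ToyPerspective {
  private links: Link[] = [];

  add(link: Link): void {
    this.links.push(link);
  }

  // Query by any combination of source, predicate, and target
  query(filter: Partial<Link>): Link[] {
    return this.links.filter(
      (l) =>
        (filter.source === undefined || l.source === filter.source) &&
        (filter.predicate === undefined || l.predicate === filter.predicate) &&
        (filter.target === undefined || l.target === filter.target)
    );
  }
}
```

This mirrors how links are used in practice: the same objective Expressions can appear in many agents' Perspectives, each expressing that agent's own subjective associations.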