diff --git a/docs/composedb/core-concepts.mdx b/docs/composedb/core-concepts.mdx
deleted file mode 100644
index 04065e9b..00000000
--- a/docs/composedb/core-concepts.mdx
+++ /dev/null
@@ -1,203 +0,0 @@
-# ComposeDB Concepts
-Learn about the ComposeDB graph database protocol and technology stack.
-
-## Graph Database Protocol
----
-In this section we will describe key aspects of the ComposeDB graph database protocol.
-
-### Graph
-ComposeDB is a composable property graph built on [Ceramic](https://ceramic.network/), where:
-
-- **Nodes** are [accounts](#accounts) or [documents](#documents), each possessing a globally unique ID
-- **Edges** are queryable [relationships](#account-to-model-relations)
-
-### Accounts
-
-Accounts are entities capable of owning and performing mutations on documents in the ComposeDB graph. Accounts usually represent end users, but they can be anything capable of signing a message such as groups, apps, devices, or services. Accounts perform ComposeDB mutations by submitting signed (authenticated) events to Ceramic.
-
-ComposeDB is built on Ceramic, so it relies on Ceramic's identity system for accounts and authentication. Ceramic implements the [Decentralized Identifiers (DIDs)](https://w3c.github.io/did-core/) protocol, a widely-adopted W3C standard for accounts.
-
-An example public DID identifier:
-
-```
-did:pkh:eip155:1:0x123...6789
-```
-#### Authentication
-
-ComposeDB goes beyond vanilla DIDs and provides a great UX with additional developer tooling. ComposeDB is compatible with the "Sign In with X" standard (e.g. Sign in with Ethereum, SIWE) and the DID Sessions library, which enables end users to initiate long-lived [sessions](./guides/composedb-client/user-sessions.mdx) from their existing blockchain wallet such as MetaMask or Phantom with only one signature, making Web3 authentication feel like Web2.
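-
-As a minimal sketch of what this looks like in practice (assuming the `did-session` and `@didtools/pkh-ethereum` packages and a browser wallet injected at `window.ethereum`), a session can be created with a single signature:
-
-```typescript
-import { DIDSession } from "did-session";
-import { EthereumWebAuth, getAccountId } from "@didtools/pkh-ethereum";
-
-// Ask the injected wallet for an account and build an auth method from it.
-const ethProvider = (window as any).ethereum;
-const addresses = await ethProvider.request({ method: "eth_requestAccounts" });
-const accountId = await getAccountId(ethProvider, addresses[0]);
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId);
-
-// One wallet signature authorizes a long-lived session scoped to the given resources.
-const session = await DIDSession.authorize(authMethod, { resources: ["ceramic://*"] });
-// session.did can now be attached to a Ceramic/ComposeDB client to perform mutations.
-```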
-
-### Documents
-
-A document is a single mutable instance of structured data in the ComposeDB graph. All documents must conform to a [model](#models) and be authored by an account. Updates to a document must also adhere to its model and can only be performed by its owner's account. ComposeDB APIs abstract away the fact that documents are actually stored in Ceramic [streams](../protocol/js-ceramic/streams/streams-index).
-
-### Models
-Models contain metadata and GraphQL schemas for documents. All documents must be based on a model. Models are designed to be plug-and-play so they can easily be reused by ComposeDB application developers, removing the burden of data modeling. When multiple applications reuse the same model, they also share access to the same documents (data set) on the ComposeDB graph, enabling data composability and reuse. Like documents, models are stored in Ceramic streams; however, unlike documents, models are immutable.
-
-#### Modeling Basics
-
-Models are written in `.graphql` files using a subset of the [GraphQL Schema Definition Language (SDL)](https://composedb.js.org/docs/0.5.x/api/sdl/scalars). Within a model, it is possible to define specific properties that store relations to other models or accounts. When using the model, it's possible to perform queries based on these relations. See [Introduction to Modeling](./guides/data-modeling/introduction-to-modeling.mdx) to learn the basics of writing models.
-
-An example `Post` model, whose documents would store social posts:
-
-```graphql
-type Post @createModel(accountRelation: LIST, description: "A simple Post") {
-  body: String! @string(minLength: 1, maxLength: 100)
-  edited: DateTime
-  created: DateTime!
-  profileId: StreamID! @documentReference(model: "BasicProfile")
-  profile: BasicProfile! @relationDocument(property: "profileId")
-}
-```
-- `type` sets the name for the model, in this case `Post`
-- `@createModel` is a directive that creates a new model and takes `accountRelation` and `description` as parameters
-- `accountRelation` sets the maximum number of documents per account, where `SINGLE` is one and `LIST` is unlimited
-- `description` sets the description for the model
-
-
-#### Account to Model Relations
-
-Any document can always be queried by its author's account using the required `accountRelation` property. See [Account to Model Relations](./guides/data-modeling/relations.mdx#account-to-model) for more.
-
-##### Model
-
-Here is a model that stores a `DisplayName` for a given user:
-
-```graphql
-type DisplayName @createModel(accountRelation: SINGLE, description: "Display name for a user") {
- displayName: String! @string(minLength: 3, maxLength: 50)
-}
-```
-
-
-
-#### Model to Account Relations
-
-Enable a document to be queried by a referenced account using the `@accountReference` directive. See [Model to Account Relations](./guides/data-modeling/relations.mdx#model-to-account) for more.
-
-##### Model
-
-Here is a model, `Message`, that stores a direct message (DM) sent from one user to another:
-
-```graphql
-type Message @createModel(accountRelation: LIST, description: "Direct message model") {
- recipient: DID! @accountReference
- directMessage: String! @string(minLength: 1, maxLength: 200)
-}
-```
-
-
-
-#### Model to Model Relations
-
-Enable a document to be queried by its relationship to other documents using the `@documentReference` and `@relationFrom` directives. See [Model to Model Relations](./guides/data-modeling/relations.mdx#model-to-model) for more.
-
-##### Model
-
-Here are the models that enable comments to be made on a post. They support unlimited comments per user and bi-directional queries from any comment to the original post and from the original post to all of its comments.
-
-```graphql
-# Load post model (using streamID)
-
-type Post @loadModel(id: "kjzl6hvfrbw6c99mdfpjx1z3fue7sesgua6gsl1vu97229lq56344zu9bawnf96"){
- id: ID!
-}
-
-# New comment model
-# Set relationship to original post
-# Enable querying comment to get original post
-
-type Comment @createModel(accountRelation: LIST, description: "A comment on a Post") {
- postID: StreamID! @documentReference(model: "Post")
- post: Post! @relationDocument(property: "postID")
- text: String! @string(maxLength: 500)
-}
-
-# Load comment model
-
-type Comment @loadModel(id: "kjzl6hvfrbw6c9oo2ync09y6z5c9mas9u49lfzcowepuzxmcn3pzztvzd0c7gh0") {
- id: ID!
-}
-
-# Load post model
-# Extend post model with comments
-# Set relationships to all comments
-# Enable querying post to get all comments
-
-type Post @loadModel(id: "kjzl6hvfrbw6c99mdfpjx1z3fue7sesgua6gsl1vu97229lq56344zu9bawnf96") {
- comments: [Comment] @relationFrom(model: "Comment", property: "postID")
-}
-```
-
-### Composites
-
-A composite is a group of one or more models (e.g. profiles, blog posts, comments) that defines the complete graph database schema for an application. To be usable in your application, one or more models need to be bundled into a composite. Composites have three representations used throughout the ComposeDB stack:
-
-| Representation | Usage |
-|---|---|
-|__Composite__| The base composite containing a collection of models encoded in JSON |
-|__Deployed Composite__| Once deployed, instructs a node which documents to index based on the composite's models |
-|__Compiled Composite__| Once compiled, enables client to query and mutate documents based on the composite's models |
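-
-As an illustrative sketch of how these representations map onto the development workflow (assuming the `@composedb/devtools` package, a Ceramic client authenticated with an admin DID, and a `schema` string containing your models):
-
-```typescript
-import { CeramicClient } from "@ceramicnetwork/http-client";
-import { Composite } from "@composedb/devtools";
-
-const ceramic = new CeramicClient("http://localhost:7007");
-// ceramic.did is assumed to already be set to an authenticated admin DID.
-
-// 1. Composite: bundle the models defined in a GraphQL schema string.
-const composite = await Composite.create({ ceramic, schema });
-
-// 2. Deployed composite: instruct the node to index the composite's models.
-await composite.startIndexingOn(ceramic);
-
-// 3. Compiled composite: produce the runtime definition the client consumes.
-const runtimeDefinition = composite.toRuntime();
-```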
-
-## Core Components
----
-
-Learn about the software components that power ComposeDB technology.
-
-### ComposeDB Server
-
-As mentioned earlier, ComposeDB is a decentralized property graph database built on top of [Ceramic](https://ceramic.network). A ComposeDB server is actually just a Ceramic node backed by a SQL database that stores an index of ComposeDB documents based on the models contained in a composite. The index database provides fast access and high-performance queries against documents in the ComposeDB graph without suffering from the performance limitations of decentralization.
-
-Although each ComposeDB server decides which documents it wants to index, all ComposeDB servers are networked and replicate data across the Ceramic network which acts as a global syncing protocol. Your local database state is built up from a global network of cryptographically-verifiable documents and models, allowing you to trust the integrity of your index.
-
-Today, all ComposeDB developers need to run their own server to ensure data availability. However, various hosted node providers are emerging in the ecosystem to provide this functionality as a service. Down the road, Ceramic plans to implement cryptoeconomic guarantees around data availability.
-
-Here's an overview of services running in a ComposeDB server:
-
-| Service | Description |
-|---|---|
-|__Database__| SQL database used to store an index of ComposeDB documents |
-|__Ceramic__| Decentralized event streaming infrastructure used to store ComposeDB models and documents |
-|__IPFS__| Low-level peer-to-peer data protocols used by Ceramic |
-
-### ComposeDB Client
-
-[ComposeDB client](./guides/composedb-client/composedb-client.mdx) is a relatively simple software library that connects your application to a ComposeDB server. It is written in JS/TS and exposes a GraphQL interface that enables your application to perform queries and mutations against a ComposeDB server. The client needs to be passed a compiled composite in order to populate its APIs and understand the schemas for the models you're using.
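-
-A minimal sketch of wiring up the client (the import path for the compiled composite definition is illustrative):
-
-```typescript
-import { ComposeClient } from "@composedb/client";
-// Runtime definition produced by compiling your composite.
-import { definition } from "./__generated__/definition.js";
-
-const compose = new ComposeClient({ ceramic: "http://localhost:7007", definition });
-
-// The client now exposes GraphQL operations for the composite's models.
-const result = await compose.executeQuery(`query { viewer { id } }`);
-```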
-
-### Model Catalog
-
-As mentioned earlier, composites and their underlying data models are designed to be reusable, making it simple to build complementary and interoperable apps. Apps that reuse each other's models create instant data interoperability, without any additional integrations needed.
-
-The [Model Catalog](./guides/data-modeling/model-catalog.mdx) allows developers to discover, share and reuse data models, enabling data composability across applications within the ComposeDB ecosystem. All models contained in deployed composites are automatically added to the catalog.
-
-#### Catalog Interfaces
-Currently, discovering models in the catalog happens through commands in the ComposeDB CLI. However, we're looking for people in the community to create great products and user interfaces for interacting with the catalog.
-
-
-
-## Next Steps
----
-Ready to dive deeper? Head to [**Next Steps →**](./next-steps.mdx)
diff --git a/docs/composedb/create-ceramic-app.mdx b/docs/composedb/create-ceramic-app.mdx
deleted file mode 100644
index 83dccbfb..00000000
--- a/docs/composedb/create-ceramic-app.mdx
+++ /dev/null
@@ -1,92 +0,0 @@
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-# Scaffold a new Ceramic app
-
-Get up and running quickly with a basic ComposeDB application with one command.
-
-**Prerequisites**
-
-- Operating system: **Linux, Mac, or Windows** (Windows only via [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install))
-- **Node.js v20** - If you are using a different version, please use `nvm` to install Node.js v20 for best results.
-- **npm v10** - Installed automatically with Node.js v20
-
-You will also need to run a `ceramic-one` node in the background, which provides access to the Ceramic
-data network. To set it up, follow the steps below:
-
-:::note
-The instructions below cover the steps for macOS-based systems. If you are running a Linux-based system, you can find the
-instructions [here](https://github.com/ceramicnetwork/rust-ceramic?tab=readme-ov-file#linux---debian-based-distributions).
-:::
-
-1. Install the `ceramic-one` component using [Homebrew](https://brew.sh/):
-
-```bash
-brew install ceramicnetwork/tap/ceramic-one
-```
-
-2. Start the `ceramic-one` daemon using the following command:
-```bash
-ceramic-one daemon --network in-memory
-```
-
-:::note
-By default, the command above will spin up a node that uses the `in-memory` network. You can change this behaviour by providing the `--network` flag and specifying a network of your choice. For example:
-
-```bash
-ceramic-one daemon --network testnet-clay
-```
-:::
-
----
-
-## Start the ComposeDB example app
-
-You can easily create a simple ComposeDB starter project by using our CLI and running the following command:
-
-<Tabs groupId="package-manager">
-  <TabItem value="npm" label="npm">
-
-```powershell
-npx create-ceramic-app
-```
-
-  </TabItem>
-  <TabItem value="pnpm" label="pnpm">
-
-```powershell
-pnpx create-ceramic-app
-```
-
-  </TabItem>
-  <TabItem value="yarn" label="yarn">
-
-:::tip
-You need at least yarn 2.x to use the `yarn dlx` command. If you have an older version, upgrade it by running `yarn set version stable` and `yarn install`.
-
-Then you can run the following command to create a new Ceramic app using yarn 2.x.
-:::
-
-```powershell
-yarn dlx create-ceramic-app
-```
-
-  </TabItem>
-  <TabItem value="bun" label="bun">
-
-```powershell
-bunx create-ceramic-app
-```
-
-  </TabItem>
-</Tabs>
-
-This command will create a new directory containing a basic ComposeDB application (a social app). It will clone the app from the [example repository](https://github.com/ceramicstudio/ComposeDbExampleApp.git), install all dependencies, launch a local Ceramic node and a local GraphQL server, and start the app.
-
-Once you have had a chance to play with the example app and see how it works, you can start building your own. For that, you will probably want to have more control over your environment and the code. You can find more information on how to set up your environment in the [Set up your environment](./set-up-your-environment) section.
diff --git a/docs/composedb/create-your-composite.mdx b/docs/composedb/create-your-composite.mdx
deleted file mode 100644
index 9f96fcfa..00000000
--- a/docs/composedb/create-your-composite.mdx
+++ /dev/null
@@ -1,131 +0,0 @@
-# Create your composite
-
-Your composite serves as your graph database schema. In this guide, we will create your first composite.
-
-:::tip
-
-Before continuing, you must have [set up your environment](./set-up-your-environment.mdx) in the previous step.
-
-:::
-
-## Overview
----
-
-A composite is your database schema for ComposeDB, which includes a collection of data models. Once created, your composite instructs your node which models to index and also allows your client to perform queries and mutations on these models.
-
-## Data Model Catalog
----
-
-The [Model Catalog](./guides/data-modeling/model-catalog.mdx) contains all models created by other ComposeDB developers. By creating or reusing models from the catalog in your composite, you can instantly share and sync data with other applications. This brings native app data composability to Web3: no more API integrations.
-
-### List all models
-To list all models in the model catalog, run the following command:
-
-```bash
-composedb model:list --table
-```
-
-Here, the flag `--table` will display the output in an organized table view and provide more details about each model’s functionality. By default, this command lists models in production on mainnet. To see models being developed on clay testnet, specify `--network=testnet-clay`:
-
-```bash
-composedb model:list --network=testnet-clay --table
-```
-
-
-
-Notice each model has the following properties:
-
-- `Name` - model name
-- `Unique ID` - unique identifier (stream ID) for the model
-- `Description` - description of the model’s functionality
-
-## Creating the composite
----
-
-In this section we will show how to create a composite by downloading models from the model catalog.
-
-### Using a single model
-
-You can fetch any existing model from the catalog by referencing the model's unique ID. For example, for your basic social media app, use the existing model `SimpleProfile`. To fetch the model into your working directory, take note of its stream ID in the listing above and run the following command:
-
-```bash
-composedb composite:from-model kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65 --ceramic-url=http://localhost:7007 --output=my-first-composite.json
-```
-
-You should see the following output in your terminal:
-
-```bash
-✔ Creating a composite from models... Composite was created and its encoded representation was saved in my-first-composite.json
-```
-
-This output means that you now have the `SimpleProfile` model stored locally in a file called `my-first-composite.json`.
-
-### Using multiple models
-
-If your application needs multiple models, for example the `SimpleProfile` and `Post` models, you can bundle them into a single composite. To fetch them, take note of their stream IDs and provide them to the ComposeDB CLI as follows:
-
-```bash
-composedb composite:from-model kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65 kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy --ceramic-url=http://localhost:7007 --output=my-first-composite.json
-```
-
-The output of this command will be a composite file named `my-first-composite.json`.
-
-## Using the composite
----
-### Deploying the composite
-
-You will have to deploy the composite containing the fetched models to your local Ceramic node so that they can be used when building and running your applications. This can be done using the ComposeDB CLI, referencing your local composite file as shown below. Note that you have to provide [your DID private key](./set-up-your-environment#generate-your-private-key) to deploy the composite:
-
-```bash
-composedb composite:deploy my-first-composite.json --ceramic-url=http://localhost:7007 --did-private-key=your-private-key
-```
-
-You should see the output similar to the one below:
-
-```bash
-ℹ Using DID did:key:z6MkoDgemAx51v8w692aZRLPdwP6UPKj3EgUhBTvbL7hCwLu
-✔ Deploying the composite... Done!
-["kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65"]
-```
-
-Whenever composites are deployed, their models are automatically indexed. This also means that these models are shared across the network (at the moment, only the Clay testnet). If you check the output of the terminal running your local Ceramic node, you should see something similar:
-
-```bash
-IMPORTANT: Starting indexing for Model kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65
-IMPORTANT: Starting indexing for Model kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy
-IMPORTANT: Creating ComposeDB Indexing table for model: kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65
-IMPORTANT: Creating ComposeDB Indexing table for model: kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy
-```
-
-This means that the composite was deployed and the models were indexed on your local node successfully! 🎉
-
-### Compiling the composite
-
-The last remaining step is compiling the composite. This is necessary in order to interact with the data in the next step of this guide:
-
-```bash
-composedb composite:compile my-first-composite.json runtime-composite.json
-```
-
-You should see the following output in your terminal:
-
-```bash
-✔ Compiling the composite... Done!
-runtime-composite.json
-```
-
-The output of this command is a JSON file called `runtime-composite.json`.
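-
-As a rough sketch of where this file goes next (assuming the `@composedb/client` package in a Node.js context), the runtime composite is loaded and handed to the ComposeDB client:
-
-```typescript
-import { readFileSync } from "node:fs";
-import { ComposeClient } from "@composedb/client";
-
-// Load the runtime definition produced by composite:compile above.
-const definition = JSON.parse(readFileSync("runtime-composite.json", "utf8"));
-const compose = new ComposeClient({ ceramic: "http://localhost:7007", definition });
-```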
-
-## Next Steps
----
-Now that you have created your composite, you are ready to use it: **[Interact with data →](./interact-with-data.mdx)**
-
-## Related Guides
-
-- [Intro to Modeling](./guides/data-modeling/data-modeling.mdx)
-
-- [Model Catalog](./guides/data-modeling/model-catalog.mdx)
-
-- [Writing Models](./guides/data-modeling/writing-models.mdx)
-
-- [Composites](./guides/data-modeling/composites.mdx)
diff --git a/docs/composedb/examples/index.mdx b/docs/composedb/examples/index.mdx
deleted file mode 100644
index 870b3bef..00000000
--- a/docs/composedb/examples/index.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-# Tutorials and Examples
-
-If you have built an example app from the [**Set up your environment**](./set-up-your-environment) section and now you're looking for more, check out the extensive list of [**Starter Applications and Tutorials**](./examples/tutorials-and-examples) or go deep with the [**Verifiable Credentials**](./examples/verifiable-credentials) guide.
diff --git a/docs/composedb/examples/taco-access-control.mdx b/docs/composedb/examples/taco-access-control.mdx
deleted file mode 100644
index 3504533f..00000000
--- a/docs/composedb/examples/taco-access-control.mdx
+++ /dev/null
@@ -1,117 +0,0 @@
-# TACo with ComposeDB
-
-*Store sensitive data on ComposeDB, using decentralized access control to enforce fine-grained decryption rights.*
-
-This guide explains how to integrate [TACo](https://docs.threshold.network/applications/threshold-access-control) into ComposeDB, which enables the storing and sharing of non-public data on Ceramic. A more detailed version of this tutorial is available [here](https://docs.threshold.network/app-development/threshold-access-control-tac/integration-guides/ceramic-+-taco).
-
-## TACo Overview
-
-TACo is a programmable encrypt/decrypt API for applications that handle sensitive user data, without compromising on privacy, security or decentralization. TACo offers a distinct alternative to centralized, permissioned, and TEE-dependent access control services.
-
-TACo is the first and only end-to-end encrypted data sharing layer in which access to data payloads is always collectively enforced by a distributed group. Today, over 120 service providers permissionlessly run TACo clients. They independently validate whether a given data request satisfies pre-specified conditions, only then provisioning decryption material fragments for client-side assembly, decryption, and plaintext access.
-
-TACo offers a flexible access control framework and language, in which access conditions can be configured individually and combined logically. Developers can compose dynamic access workflows for their users – for example, using
-the sequential conditions feature to predicate the input to a given access condition on the output of a previous condition or call. Conditions may also be programmatically combined with both on-chain and off-chain authentication methods.
-
-TACo’s encrypt/decrypt API – [taco-web](https://github.com/nucypher/taco-web) – is straightforward to integrate into any web app and usable in parallel with core Web3 infrastructure like Ceramic.
-
-### Use Cases
-
-- **Social networks & Knowledge Bases:** Leverage Ceramic's verifiable credentials and TACo's credential-based decryption to ensure that private user-generated content is only viewable by those who are supposed to see it, and nobody else.
-
-- **IoT event streams:** Let data flow from sensors to legitimate recipients, without trusting an intermediary server to handle the routing and harvest sensitive (meta)data. For example, a medical professional can be issued a temporary access token if the output data from a patient's wearable device rises above a certain threshold.
-
-- **LLM chatbots:** Messages to and from a chatbot should be 100% private, not mined by a UX-providing intermediary. Harness Ceramic's web-scale transaction processing and TACo's per-message encryption/condition granularity to provide a smooth and safe experience for users of LLM interfaces.
-
-## Example Application & Repo
-
-The "TACo with ComposeDB Message Board [Application](https://github.com/nucypher/taco-composedb/tree/main)" is provided as an example and reference for developers – illustrating how TACo and ComposeDB can be combined in a browser-based messaging app. Once installed, a simple UI shows how messages can be encrypted by data producers with access conditions embedded, and how data consumers can view messages *only* if they satisfy those conditions. Launching the demo also involves running a local Ceramic node, to which TACo-encrypted messages are saved and immediately queryable by data requestors.
-
-The following sections explain the core components of TACo’s access control system – access conditions, encryption, and decryption.
-
-### Specifying access conditions & authentication methods
-
-There are two ways in which a recipient, or data consumer, must prove their right to access the private data – (1) authentication and (2) condition fulfillment. The data producer must specify the authentication methods and condition(s) before encrypting the private data, as this configuration is embedded alongside the encrypted payload.
-
-In the example snippet below, we are using RPC conditions. The function will check the *data consumer’s* Ethereum wallet balance, which they prove ownership of via the chosen authentication method – in this case, an EIP-4361 (Sign-In with Ethereum) message. Note that this message has already been solicited and utilized by the application, analogous to single-sign-on functionality. This setup is the same as in the demo code above and can be viewed directly in the [repo](https://github.com/nucypher/taco-composedb/blob/main/src/fragments/chatinputbox.tsx#L26-L34).
-
-```TypeScript
-import { conditions } from "@nucypher/taco";
-
-const rpcCondition = new conditions.base.rpc.RpcCondition({
- chain: 80002,
- method: 'eth_getBalance',
- parameters: [':userAddressExternalEIP4361'],
- returnValueTest: {
- comparator: '>',
- value: 0,
- },
-});
-```
-
-### Encrypting & saving the data
-
-To complete the encryption step, the following are added as arguments:
-
-- `domain` – testnet or mainnet
-- `ritualId` – the ID of the cohort of TACo nodes who will collectively manage access to the data
-- a standard web3 provider
-
-The output of this function is a payload containing both the encrypted data and embedded metadata necessary for a qualifying data consumer to access the plaintext message.
-
-```TypeScript
-import { initialize, encrypt, conditions, domains, toHexString } from '@nucypher/taco';
-import { ethers } from "ethers";
-
-await initialize();
-
-const web3Provider = new ethers.providers.Web3Provider(window.ethereum);
-const ritualId = 0;
-const message = "I cannot trust a centralized access control layer with this message.";
-const messageKit = await encrypt(
-  web3Provider,
-  domains.TESTNET,
-  message,
- rpcCondition,
- ritualId,
- web3Provider.getSigner()
-);
-const encryptedMessageHex = toHexString(messageKit.toBytes());
-```
-
-### Querying & decrypting the data
-
-Data consumers interact with the TACo API via the `decrypt` function. They include the following arguments:
-
-- `provider`
-- `domain`
-- `encryptedMessage`
-- `conditionContext`
-
-`conditionContext` is a way for developers to programmatically map methods for authenticating a data consumer to specific access conditions – all executable at decryption time. For example, if the condition involves proving ownership of a social account, authenticate via OAuth.
-
-```TypeScript
-import {conditions, decrypt, Domain, encrypt, ThresholdMessageKit} from '@nucypher/taco';
-import {ethers} from "ethers";
-
-export async function decryptWithTACo(
- encryptedMessage: ThresholdMessageKit,
- domain: Domain,
- conditionContext?: conditions.context.ConditionContext
-): Promise<Uint8Array> {
- const provider = new ethers.providers.Web3Provider(window.ethereum);
- return await decrypt(
- provider,
- domain,
- encryptedMessage,
- conditionContext,
- )
-}
-```
-
-Note that the EIP4361 authentication data required to validate the user address (within the condition) is supplied via the `conditionContext` object. To understand this component better, check out the demo [repo](https://github.com/nucypher/taco-composedb/blob/main/src/fragments/chatcontent.tsx#L47).
-
-### Using ComposeDB & TACo in production
-
-For Ceramic, connect to Mainnet.
-
-For TACo, use the mainnet domain (`domains.MAINNET`). A funded Mainnet ritualID is also required – this connects the encrypt/decrypt API to a cohort of independently operated nodes and corresponds to a DKG public key generated by independent parties. A dedicated ritualID for Ceramic + TACo projects will be sponsored soon. Watch for updates here.
diff --git a/docs/composedb/examples/tutorials-and-examples.mdx b/docs/composedb/examples/tutorials-and-examples.mdx
deleted file mode 100644
index 930b5e80..00000000
--- a/docs/composedb/examples/tutorials-and-examples.mdx
+++ /dev/null
@@ -1,34 +0,0 @@
-# Starter Applications and Tutorials
-
-Looking for code samples, starter applications, and tutorials to kickstart your development process? Check out some of the examples below for inspiration.
-
-## Starter Applications
-
-- [**Official ComposeDB Example App**](https://github.com/ceramicstudio/ComposeDbExampleApp) - A starter application built around a social media platform use-case. A great first step if you haven't done anything with ComposeDB yet.
-- [**Lit Protocol with ComposeDB**](https://github.com/ceramicstudio/lit-composedb) - Encrypt and decrypt data based on on-chain condition logic using Lit Protocol while storing on ComposeDB
-- [**Ethereum Attestation Service on ComposeDB**](https://github.com/ceramicstudio/ceramic-eas) - Save attestations generated using the Ethereum Attestation Service to the Ceramic Network using ComposeDB.
-- [**OpenAI Realtime Chat with ComposeDB**](https://github.com/ceramicstudio/ceramic-ai) - Interact with an OpenAI API endpoint in the form of a realtime chat application with storage on ComposeDB.
-
-## Tutorials
-
-- [**Getting Started with ComposeDB (video)**](https://www.youtube.com/watch?v=r68FXBTCBZ4) - Follow an instructional video on getting started and set up on ComposeDB.
-- [**Verifiable Credentials**](./verifiable-credentials.mdx) - Learn how to create and verify verifiable credentials on ComposeDB.
-- [**Lit Protocol with ComposeDB**](https://developer.litprotocol.com/v3/integrations/storage/ceramic-example) - A tutorial that walks the reader through how the `Lit Protocol with ComposeDB` repository (linked above) works under the hood.
-- [**Decentralized Databases: ComposeDB**](https://dev.to/fllstck/decentralized-databases-composedb-49m3) - A tutorial that walks the reader through how to set up a decentralized blog on ComposeDB.
-- [**Build an AI Chatbot on ComposeDB**](https://learnweb3.io/lessons/build-an-ai-chatbot-on-compose-db-and-the-ceramic-network) - Read a LearnWeb3.io tutorial on how the `OpenAI Realtime Chat with ComposeDB` repository (linked above) works and how to set it up.
-- [**Query Filtering and Ordering in ComposeDB**](https://blog.ceramic.network/tutorial-query-filtering-and-ordering-in-composedb/) - Learn the ins and outs of filtering and ordering based on schema subfields in this tutorial blog post.
-- [**Creating `MetIRL` Attestations with EAS**](https://docs.attest.sh/docs/tutorials/ceramic-storage) - Learn how the `Ethereum Attestation Service on ComposeDB` repository (linked above) works, how to generate attestations, and how to create confirmations.
-- [**Encrypted Data on ComposeDB**](https://blog.ceramic.network/tutorial-encrypted-data-on-composedb/) - Learn yet another way to encrypt and decrypt data on ComposeDB by generating an `encryptionDid` instance.
-- [**Verax Attestations with Ceramic Storage**](https://docs.ver.ax/verax-documentation/developer-guides/tutorials/using-ceramic-to-store-the-attestation-payload) - Learn how Ceramic can be used together with Verax on-chain attestations as an efficient storage mechanism for off-chain metadata.
-- [**Mastering SET Relations and Immutable Fields (video)**](https://www.youtube.com/watch?v=T2BRQqPI354) - An instructional video walk-through of how to use SET account relations and immutable field features in ComposeDB.
-
-## Experiments & SDKs
-
-- [**Web3 Points Library**](https://github.com/ceramicstudio/solutions-sdk/tree/main/libraries/points) - An experimental use case-based library designed to support developers looking to use Ceramic as the basis for their rewards, incentives, and point systems.
-- [**Web3 Points Demo Application**](https://github.com/ceramicstudio/points-example) - A simple full-stack demonstration of how to use the Web3 Points Library to reward users for joining a community's platform presence (in this case, a Discord server). Also contains an extension using Gitcoin Passport located in the `with-gitcoin` branch.
-- [**Web3 Points Library Example App Tutorial**](https://blog.ceramic.network/web3-points-library-tutorial/) - A walk-through of the example app demo application (linked above).
-- [**Web3 Points Example App Tutorial - YouTube Version**](https://www.youtube.com/watch?v=75IZp2oYncM) - A video walk-through version of the tutorial mentioned above.
-
-### **Want your open-source examples featured here?**
-
-Get in touch with us on the [Ceramic Discord](https://chat.ceramic.network) - we'd love to see what you're building!
diff --git a/docs/composedb/examples/verifiable-credentials.mdx b/docs/composedb/examples/verifiable-credentials.mdx
deleted file mode 100644
index e75c5fd3..00000000
--- a/docs/composedb/examples/verifiable-credentials.mdx
+++ /dev/null
@@ -1,652 +0,0 @@
-# Verifiable Credentials
-
-Verifiable Credentials are a W3C standard used by development teams that need tamper-evident claims which can be cryptographically proven to reveal who issued the claim, who (if anyone) is the recipient, the content of the claim itself, and more.
-
-While there are multiple libraries and data model implementations available to developers who want to generate and validate Verifiable Credentials for their applications, this guide outlines one example to help illustrate these general concepts.
-
-## But First - What are Verifiable Credentials?
-
-[Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) offer a digital credentialing format that follows specific [World Wide Web Consortium](https://www.w3.org/) open standards. This format relies on several key characteristics that help ensure its reliability as a tamper-evident credential: a digital signature that cannot be forged, the use of Decentralized Identifiers (DIDs) to represent unique individual identities, and a predictable core data model that allows credential instances to be reconstructed as presentations.
-
-Generally speaking, these components break down into three primary categories that make up a Verifiable Credential:
-
-- **Metadata**: This data is often cryptographically signed by the issuer and contains information about the credential itself (such as who the issuer is, when it expires, and so on) in addition to the credential identifier
-- **Claim**: A set of claims about the credential subject (these must be tamper-proof)
-- **Proof**: A set of properties that allows people to cryptographically verify the source of the data and whether the data has been tampered with
-
-A common use case to point to would be an online education platform that issues credentials when students have completed courses. A Verifiable Credential could be used to reliably show that an issuer (the education platform) attests that a given student (the eventual credential holder) has completed a specific course (included in the metadata). If a graduate student program needed to verify a given credential to accept students, those programs (playing the role of a verifier) would request proofs from the holder and be able to know whether those proofs were valid, invalid, or tampered with.
-
-### Where do Verifiable Credentials and Decentralized Storage Converge?
-
-Given the data model Verifiable Credentials use, individual credential instances can be stored anywhere - from offline storage on a hard drive, to traditional databases controlled by companies who rely on Verifiable Credentials, to smart contracts on a blockchain, to more performant peer-to-peer storage, and everything in between. Why, then, would developers choose to store verifiable credentials on Ceramic?
-
-**Data Interoperability**
-
-Since Verifiable Credentials are flexible enough to describe a seemingly limitless set of circumstances, yet standardized enough to be able to easily verify the proofs included therein, developers who build on Ceramic not only benefit from the performance and querying capabilities offered by ComposeDB but can also consume verifiable credentials from other issuers and communities built on Ceramic.
-
-**Self-Sovereign Identity (SSI)**
-
-Verifiable Credentials contribute to self-sovereign identity in a major way by allowing users to easily prove their identity and share credentials without delegating factors of their identity to some central authority. Developers who allow their users to store their credentials on Ceramic enable their users to retain control and ownership of their data. Since streams in the Ceramic Network can only be modified with the permission of the controller, each credential saved to Ceramic can only be edited or changed in the future by the users themselves.
-
-**Ease-Of-Querying**
-
-While storing raw JSON files to other decentralized storage options like IPFS is always an option, developers who require any layer of storage at scale with the ability to filter, sort, and query data based on more precise qualities will need a storage option that provides functional similarities to a traditional database. This is where ComposeDB comes into play, offering a familiar database experience with native support for GraphQL and automatic performance gains by splitting read/write operations.
-
-While the list goes on, let's hop into the guide.
-
-## Getting Started - Key Components
-
-To support the functionality we need for our application that uses Verifiable Credentials, we will rely on the following tools:
-
-**Verifiable Data Registry**
-
-One of the key components outlined in the [Verifiable Credential Ecosystem](https://www.w3.org/TR/vc-data-model-2.0/#dfn-verifiable-data-registries) is a Verifiable Data Registry. This component is responsible for maintaining things like Verifiable Credential schemas. To fulfill this function, we will be using [Serto](https://schemas.serto.id/), a shared repository of schemas.
-
-**Verifiable Credential Library**
-
-Our application will also require an open-source library that makes it easy to generate and verify credentials across the various formats needed by our application. For our library, we will use [Veramo](https://veramo.io/).
-
-**Decentralized Storage**
-
-Finally, we will be using ComposeDB to both store credentials for our user, as well as retrieve credentials to verify.
-
-### Defining Our Schema
-
-If you [sign up](https://schemas.serto.id/) for a free account on Serto you'll have the ability to define credential schemas for your uses and applications. However, for our application, we will be using an existing schema definition called `Vetted Reviewer` (see the definition [here](https://schemas.serto.id/schema/vetted-reviewer)).
-
-
-
-
-
-
-
-The basic idea here is that an entity (such as a workplace or group of collaborators) might want to issue `Vetted Reviewer` instances to code reviewers they trust. As you'll notice, the data model is quite simple - we have an identifier for the credential, issuer, and recipient, as well as a date and a simple boolean field indicating whether the subject is or is not trusted.
-
-Serto also supplies a peek view into what the schema would look like in JSON format (though we will later create full instances that we can also inspect):
-
-
-
-
-
-
-
-## Setting Up Our Veramo Agent
-
-Now that we have our schema defined that we'll use for our credentials, we can start setting up the section of our application that uses Veramo.
-
-As outlined in the [Veramo Docs](https://veramo.io/docs/veramo_agent/introduction), our Veramo Agent will act as our application's interface for issuing and verifying credentials, as well as managing a DID that represents the application itself.
-
-We've already done most of this work for you, which we're about to walk through below.
-
-To get started, open a new terminal and clone the following repository:
-
-```bash
-git clone https://github.com/ceramicstudio/verifiable-credentials && cd verifiable-credentials
-```
-
-Once you have the repository opened in your text editor of choice, you'll see two sub-directories - one named `client` and the other named `express-veramo`. Let's first explore our `express-veramo` directory.
-
-This section of our application is meant to serve our Veramo Agent on an Express.js server that the `client` section of our application will use to generate and verify credentials.
-
-**Veramo Agent**
-
-If you open `/express-veramo/src/veramo/setup.ts` in your code editor, you can observe how we're instantiating our agent with all of the relevant plugin settings we'll need. For example, we've set our `defaultProvider` (on line 98 within our `DIDManager` instantiation) to "did:key". We've also enabled the ability to resolve "did:key" DIDs by adding a call to `keyDidResolver()` to our `DIDResolverPlugin` instantiation. Finally, we will be using a local SQLite instance (with TypeORM) to manage our Veramo Agent's DIDs and private keys.
-
-Our agent (with all our custom configurations) will then be available to import and use within each of our methods used to generate DIDs, create credentials, and verify credentials. For example, if you take a look at `/express-veramo/src/create-identifier.ts` you'll see how `main` uses our agent to access the `didManagerCreate` method on the agent's prototype chain.
-
-Finally, our `/express-veramo/index.ts` file exposes these agent methods as endpoints for our Express server.
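-
-As a pared-down sketch of that pattern (the real `setup.ts` uses TypeORM-backed stores and additional plugins; in-memory stores are swapped in here for brevity):
-
-```typescript
-import { createAgent, IDIDManager, IKeyManager, IResolver } from "@veramo/core";
-import { DIDManager, MemoryDIDStore } from "@veramo/did-manager";
-import { KeyManager, MemoryKeyStore, MemoryPrivateKeyStore } from "@veramo/key-manager";
-import { KeyManagementSystem } from "@veramo/kms-local";
-import { KeyDIDProvider, getDidKeyResolver } from "@veramo/did-provider-key";
-import { DIDResolverPlugin } from "@veramo/did-resolver";
-import { Resolver } from "did-resolver";
-
-export const agent = createAgent<IDIDManager & IKeyManager & IResolver>({
-  plugins: [
-    new KeyManager({
-      store: new MemoryKeyStore(),
-      kms: { local: new KeyManagementSystem(new MemoryPrivateKeyStore()) },
-    }),
-    new DIDManager({
-      store: new MemoryDIDStore(),
-      defaultProvider: "did:key",
-      providers: { "did:key": new KeyDIDProvider({ defaultKms: "local" }) },
-    }),
-    new DIDResolverPlugin({ resolver: new Resolver({ ...getDidKeyResolver() }) }),
-  ],
-});
-
-// create-identifier.ts then boils down to a single call:
-const identifier = await agent.didManagerCreate({ provider: "did:key" });
-```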
-
-### Install Your Dependencies
-
-Install your dependencies from within the `/express-veramo` directory:
-
-```bash
-npm install
-```
-
-### Environment Variables
-
-You will notice that our Express application will need two environment variables.
-
-1. `INFURA_PROJECT_ID`: Simply go to [infura.io](https://www.infura.io/) and set up a new Web3 API key. Once set up, you will only need to copy the key itself (it should look something like "b45j76facf05112f9664778z1bf6bd50").
-2. `KMS_SECRET_KEY`: You can generate a KMS Secret Key using the Veramo CLI:
-
-```bash
-npx @veramo/cli config create-secret-key
-```
-
-### Generate Veramo DIDs
-
-We will also need to generate an admin DID that our Veramo Agent will use when generating credentials. This can be thought of as the admin seed representing our application:
-
-```bash
-npm run generate-id
-```
-
-We are now ready to start up our Express server! To begin, run the following in your terminal:
-
-```bash
-npm start
-```
-
-If all is successful so far, you should see the following in your terminal logs:
-
-`server started at http://localhost:8080`
-
-### Generate EIP-712 Signed Verifiable Credentials
-
-The [EIP712](https://eips.ethereum.org/EIPS/eip-712) standard allows wallets to display data in signing prompts in a readable and highly structured format and also happens to be a standard supported by Veramo's modular plugins. Unlike other formats, EIP712 requires `TypedData` (a JSON object containing type information, as well as domain separator parameters and the message).
-
-If you observe `/express-veramo/src/create-credential-712.ts` you'll notice that it looks almost identical to `/express-veramo/src/create-credential-jws.ts`. However, one major factor that will yield a very different output is the value for the "proofFormat" field (we use `EthereumEip712Signature2021` for our EIP712 instance).
-
-You'll also notice that the body of our credential references the schema we defined in Serto earlier, as well as the actual subject of the credential (indicating the recipient's identifier and whether they are trusted).
-
-In `/express-veramo/index.ts`, you'll see that we're exposing this method on our `/create` endpoint.
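-
-The heart of that method is a single call to the agent; a trimmed sketch (variable names illustrative) looks like this:
-
-```typescript
-// recipientDid arrives in the POST body; issuerDid is the agent's admin DID.
-const verifiableCredential = await agent.createVerifiableCredential({
-  credential: {
-    issuer: { id: issuerDid },
-    "@context": [
-      "https://www.w3.org/2018/credentials/v1",
-      "https://beta.api.schemas.serto.id/v1/public/vetted-reviewer/1.0/ld-context.json",
-    ],
-    type: ["VerifiableCredential", "VettedReviewer"],
-    credentialSubject: { id: recipientDid, isTrusted: true },
-  },
-  // The JWS variant passes proofFormat: "jwt" instead.
-  proofFormat: "EthereumEip712Signature2021",
-});
-```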
-
-If you have Postman, you can open up a window and send a POST request to `http://localhost:8080/create` using a dummy input DID:
-
-
-
-
-
-
-
-You'll be able to view the output thereafter, which should look similar to this:
-
-```json
-{
- "issuer": "did:key:zQ3shjSvqxWu82TG8ARw6yZYvRhnAxi3MrDS7MoghVJLrUh1h",
- "@context": [
- "https://www.w3.org/2018/credentials/v1",
- "https://beta.api.schemas.serto.id/v1/public/vetted-reviewer/1.0/ld-context.json"
- ],
- "type": ["VerifiableCredential", "VettedReviewer"],
- "credentialSchema": {
- "id": "https://beta.api.schemas.serto.id/v1/public/vetted-reviewer/1.0/json-schema.json",
- "type": "JsonSchemaValidator2018"
- },
- "issuanceDate": "2023-10-24T22:10:31.906Z",
- "credentialSubject": {
- "isTrusted": true,
- "id": "did:pkh:eip155:1:0xc362c16a0dcbea78fb03a8f97f56deea905617bb"
- },
- "proof": {
- "verificationMethod": "did:key:zQ3shjSvqxWu82TG8ARw6yZYvRhnAxi3MrDS7MoghVJLrUh1h#zQ3shjSvqxWu82TG8ARw6yZYvRhnAxi3MrDS7MoghVJLrUh1h",
- "created": "2023-10-24T22:10:31.906Z",
- "proofPurpose": "assertionMethod",
- "type": "EthereumEip712Signature2021",
- "proofValue": "0xa090c41ba3a768ddf2695000ddc98009bf7dcddf9778e9d54cefcd3adbd7faaf08d4f2e9b31038112221a2d8dddbc1b0024488ea3b926400b767d1fc1ea4309b1b",
- "eip712": {
- "domain": {
- "chainId": 1,
- "name": "VerifiableCredential",
- "version": "1"
- },
- "types": {
- "EIP712Domain": [
- {
- "name": "name",
- "type": "string"
- },
- {
- "name": "version",
- "type": "string"
- },
- {
- "name": "chainId",
- "type": "uint256"
- }
- ],
- "CredentialSchema": [
- {
- "name": "id",
- "type": "string"
- },
- {
- "name": "type",
- "type": "string"
- }
- ],
- "CredentialSubject": [
- {
- "name": "id",
- "type": "string"
- },
- {
- "name": "isTrusted",
- "type": "bool"
- }
- ],
- "Proof": [
- {
- "name": "created",
- "type": "string"
- },
- {
- "name": "proofPurpose",
- "type": "string"
- },
- {
- "name": "type",
- "type": "string"
- },
- {
- "name": "verificationMethod",
- "type": "string"
- }
- ],
- "VerifiableCredential": [
- {
- "name": "@context",
- "type": "string[]"
- },
- {
- "name": "credentialSchema",
- "type": "CredentialSchema"
- },
- {
- "name": "credentialSubject",
- "type": "CredentialSubject"
- },
- {
- "name": "issuanceDate",
- "type": "string"
- },
- {
- "name": "issuer",
- "type": "string"
- },
- {
- "name": "proof",
- "type": "Proof"
- },
- {
- "name": "type",
- "type": "string[]"
- }
- ]
- },
- "primaryType": "VerifiableCredential"
- }
- }
-}
-```
-
-Notice how the `proof` key in our JSON output includes both a `proofValue` as well as all of the `TypedData` details required for the EIP712 format.
-
-If you send the same POST request to the `/create-jws` endpoint, you will notice how the JSON Web Token output differs significantly:
-
-```json
-{
- "credentialSubject": {
- "isTrusted": true,
- "id": "did:pkh:eip155:1:0xc362c16a0dcbea78fb03a8f97f56deea905617bb"
- },
- "issuer": {
- "id": "did:key:z6MkrmuQWoiVynAchQiuVwrv8nt6dU3equSFJ3ZnuSnjhnkp"
- },
- "type": ["VerifiableCredential", "VettedReviewer"],
- "credentialSchema": {
- "id": "https://beta.api.schemas.serto.id/v1/public/vetted-reviewer/1.0/json-schema.json",
- "type": "JsonSchemaValidator2018"
- },
- "@context": [
- "https://www.w3.org/2018/credentials/v1",
- "https://beta.api.schemas.serto.id/v1/public/vetted-reviewer/1.0/ld-context.json"
- ],
- "issuanceDate": "2023-10-24T22:14:09.000Z",
- "proof": {
- "type": "JwtProof2020",
- "jwt": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJ2YyI6eyJAY29udGV4dCI6WyJodHRwczovL3d3dy53My5vcmcvMjAxOC9jcmVkZW50aWFscy92MSIsImh0dHBzOi8vYmV0YS5hcGkuc2NoZW1hcy5zZXJ0by5pZC92MS9wdWJsaWMvdmV0dGVkLXJldmlld2VyLzEuMC9sZC1jb250ZXh0Lmpzb24iXSwidHlwZSI6WyJWZXJpZmlhYmxlQ3JlZGVudGlhbCIsIlZldHRlZFJldmlld2VyIl0sImNyZWRlbnRpYWxTdWJqZWN0Ijp7ImlzVHJ1c3RlZCI6dHJ1ZX0sImNyZWRlbnRpYWxTY2hlbWEiOnsiaWQiOiJodHRwczovL2JldGEuYXBpLnNjaGVtYXMuc2VydG8uaWQvdjEvcHVibGljL3ZldHRlZC1yZXZpZXdlci8xLjAvanNvbi1zY2hlbWEuanNvbiIsInR5cGUiOiJKc29uU2NoZW1hVmFsaWRhdG9yMjAxOCJ9fSwic3ViIjoiZGlkOnBraDplaXAxNTU6MToweGMzNjJjMTZhMGRjYmVhNzhmYjAzYThmOTdmNTZkZWVhOTA1NjE3YmIiLCJuYmYiOjE2OTgxODU2NDksImlzcyI6ImRpZDprZXk6ejZNa3JtdVFXb2lWeW5BY2hRaXVWd3J2OG50NmRVM2VxdVNGSjNabnVTbmpobmtwIn0.rAhjw1_bkvY9QNSTJsoWHnsYU4ccYHngJ36x6gv567DEp85QGpz3zcKbrJAIBEdvR76C5-FcF6tSKk6TnhiADQ"
- }
-}
-```
-
-While both versions can be reliably stored and later verified, the JWT implementation compacts all of the necessary credential data and signatures into a single field.
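-
-You can see this for yourself: the middle segment of the JWT is just base64url-encoded JSON. A quick Node.js sketch (assuming the token string is in `jwt`):
-
-```typescript
-const [, payload] = jwt.split(".");
-const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
-console.log(claims.vc.credentialSubject); // { isTrusted: true }
-```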
-
-For this guide, we will show you how to store both JWT and EIP712 Verifiable Credentials on ComposeDB, and later how to reconstruct and verify them using our Veramo agent.
-
-## ComposeDB Server and Client Setup
-
-You can leave your Express server running as we begin exploring the `client` section of the application, starting with the data models we'll need for storing our Verifiable Credentials.
-
-If you open your `/client/composites` directory in your text editor, you'll find a GraphQL file that encompasses the interfaces and types we'll need for our VC data class:
-
-```graphql
-# 00-verifiableCredential.graphql
-
-## our overarching VC interface that is agnostic of our proof type
-interface VerifiableCredential @createModel(description: "A verifiable credential interface") {
- controller: DID! @documentAccount
- issuer: Issuer!
- context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- credentialSchema: CredentialSchema!
- credentialStatus: CredentialStatus
- issuanceDate: DateTime!
- expirationDate: DateTime
-}
-
-type Issuer {
- id: String! @string(maxLength: 1000)
- name: String @string(maxLength: 1000)
-}
-
-type CredentialStatus {
- id: String! @string(maxLength: 1000)
- type: String! @string(maxLength: 1000)
-}
-
-type CredentialSchema {
- id: String! @string(maxLength: 1000)
- type: String! @string(maxLength: 1000)
-}
-
-## we'll use interfaces for our proof types to generalize them as well - this one's for EIP712
-interface VCEIP712Proof implements VerifiableCredential
- @createModel(description: "A verifiable credential interface of type EIP712") {
- controller: DID! @documentAccount
- issuer: Issuer!
- context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- credentialSchema: CredentialSchema!
- credentialStatus: CredentialStatus
- issuanceDate: DateTime!
- expirationDate: DateTime
- proof: ProofEIP712!
-}
-
-## generalized JWT proof interface
-interface VCJWTProof implements VerifiableCredential
- @createModel(description: "A verifiable credential interface of type JWT") {
- controller: DID! @documentAccount
- issuer: Issuer!
- context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- credentialSchema: CredentialSchema!
- credentialStatus: CredentialStatus
- issuanceDate: DateTime!
- expirationDate: DateTime
- proof: ProofJWT!
-}
-
-type ProofEIP712 {
- verificationMethod: String! @string(maxLength: 1000)
- created: DateTime!
- proofPurpose: String! @string(maxLength: 1000)
- type: String! @string(maxLength: 1000)
- proofValue: String! @string(maxLength: 1000)
- eip712: EIP712!
-}
-
-type ProofJWT {
- type: String! @string(maxLength: 1000)
- jwt: String! @string(maxLength: 100000)
-}
-
-type EIP712 {
- domain: Domain!
- types: ProofTypes!
- primaryType: String! @string(maxLength: 100)
-}
-
-type Types {
- name: String! @string(maxLength: 100)
- type: String! @string(maxLength: 100)
-}
-
-type ProofTypes {
- EIP712Domain: [Types!]! @list(maxLength: 100)
- CredentialSchema: [Types!]! @list(maxLength: 100)
- CredentialSubject: [Types!]! @list(maxLength: 100)
- Proof: [Types!]! @list(maxLength: 100)
- VerifiableCredential: [Types!]! @list(maxLength: 100)
-}
-
-type Domain {
- chainId: Int!
- name: String! @string(maxLength: 100)
- version: String! @string(maxLength: 100)
-}
-
-type CredentialSubject {
- id: DID! @accountReference
- isTrusted: Boolean!
-}
-
-## define our EIP712 type that uses a hard-coded credentialSubject specific to our use case
-type VerifiableCredentialEIP712 implements VerifiableCredential & VCEIP712Proof
- @createModel(accountRelation: LIST, description: "A verifiable credential of type EIP712")
- @createIndex(fields: [{ path: "issuanceDate" }])
- @createIndex(fields: [{ path: "issuer" }]) {
- controller: DID! @documentAccount
- issuer: Issuer!
- context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- credentialSchema: CredentialSchema!
- credentialStatus: CredentialStatus
- issuanceDate: DateTime!
- expirationDate: DateTime
- proof: ProofEIP712!
- credentialSubject: CredentialSubject!
-}
-
-## define our JWT type that uses a hard-coded credentialSubject specific to our use case
-type VerifiableCredentialJWT implements VerifiableCredential & VCJWTProof
- @createModel(accountRelation: LIST, description: "A verifiable credential of type JWT")
- @createIndex(fields: [{ path: "issuanceDate" }])
- @createIndex(fields: [{ path: "issuer" }]) {
- controller: DID! @documentAccount
- issuer: Issuer!
- context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
- credentialSchema: CredentialSchema!
- credentialStatus: CredentialStatus
- issuanceDate: DateTime!
- expirationDate: DateTime
- proof: ProofJWT!
- credentialSubject: CredentialSubject!
-}
-```
-
-What we've done here is generalize both our `VerifiableCredential` schema and its corresponding proof types by using interfaces. This allows us to query as generally as we want (with our entrypoint on the proof-agnostic `VerifiableCredential` interface), as well as to look specifically for the `VerifiableCredentialEIP712` and `VerifiableCredentialJWT` types, which contain `credentialSubject` fields tuned for our use case.
-
-### Model Instance Controllers
-
-This example application was also intentionally set up to display two different ways a development team might choose to implement model instance ownership. More specifically, it may make sense in certain situations for the application itself to have the exclusive ability to change model instance documents after they are created. Conversely, as mentioned toward the beginning of this guide, developers may instead allow their users to retain control of each credential instance (since they know that it would be easy to tell whether a credential has been tampered with anyway).
-
-The dummy UI comprises two pages, found at `/client/src/pages/index.tsx` and `/client/src/pages/jwt.tsx`, which respectively use the components found at `/client/src/components/VC712.tsx` and `/client/src/components/VCJwt.tsx`. You'll notice how the VC712 component calls an API within the `createCredential` method found at `/api/create` (after obtaining an EIP712 Verifiable Credential from our Express server).
-
-If you take a look into `/client/src/pages/api/create.ts`, you'll find the corresponding route definition. Notice how we call an `authenticateDID` method before running a mutation query to authenticate a static seed environment variable (which represents our application's DID).
-
-Conversely, you'll notice how the `createCredential` method within our `/client/src/components/VCJwt.tsx` component executes a mutation query on our `compose` instance. If you dig a bit deeper, you'll notice that we (as the individual user) are already authenticated on the `ComposeClient` instance (imported from `/client/src/fragments/index.tsx`).
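-
-The mutation itself follows ComposeDB's generated `create<ModelName>` convention; a trimmed sketch of what the component executes (variable name illustrative) might look like:
-
-```typescript
-const result = await compose.executeQuery(
-  `mutation CreateJwtCredential($i: CreateVerifiableCredentialJWTInput!) {
-    createVerifiableCredentialJWT(input: $i) {
-      document {
-        id
-      }
-    }
-  }`,
-  { i: { content: credentialToSave } }
-);
-```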
-
-### Getting Started
-
-To get started in the client section, we'll first have to install our dependencies (be sure to cd into your `client` directory):
-
-```bash
-npm install
-```
-
-Next, we will need to generate an admin seed and ComposeDB configuration that our application will use. This example repository contains a script found at `/client/scripts/commands.mjs` that generates one for you (preset to run "inmemory", which is ideal for testing).
-
-To generate your necessary credentials, run the following in your terminal:
-
-```bash
-npm run generate
-```
-
-Next, you will need to create a `.env` file with a `SECRET_KEY` - this is what our `/client/src/pages/api/create` route will use to authenticate us as the developer on our ComposeClient instance (it must be 32 bytes and must be different from our `admin_seed.txt`).
-
-Feel free to copy-paste this dummy seed into your .env file:
-
-```bash
-SECRET_KEY="11b574d316903ced6cc3f4787bbcc3047d9c72d1da4d83e36fe714ef7891jb50"
-```
-
-Finally, go ahead and start your application in development mode (switch to node v16 first):
-
-```bash
-nvm use 16
-npm run dev
-```
-
-### Interacting with the UI
-
-If you have been following along up until this point, you should be able to access the UI in your browser on port 3000. Go ahead and connect your wallet using the Web3Modal:
-
-
-
-While our UI in this context does not illustrate a setup that would be used in production, readers following this guide should imagine an application's flow whereby users log in, exhibit behavior by performing tasks, and receive verifiable credentials signed by the application.
-
-In our case, we're mimicking this behavior with the simple push of a button, which you'll see after logging in:
-
-
-
-Jumping back to your text editor, you'll see how the `Generate Verifiable Credential` button click is tied to the `createCredential` method found in `/client/src/components/VC712.tsx`. This method then sends a fetch request to our Express server running on port 8080 with the `/create` route, thus invoking a response from the corresponding `createCredential` method at `/express-veramo/src/create-credential-712.ts`. You'll also notice how we're sending over our user's "did:pkh" DID (which we saved in our local storage for easy access) to be used for the `id` field in our `credentialSubject` key within our credential.
-
-Finally, back in our React component, you'll see how we use the result of this fetch request to hit our endpoint at `/api/create`, which generates a model instance document using our application as the controller DID (which we discussed above).
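-
-Put together, the flow looks roughly like the sketch below (the endpoint paths match the repository, while the payload shapes and the localStorage key name are illustrative assumptions):
-
-```jsx
-// Sketch of the two-hop flow in VC712.tsx
-const createCredential = async () => {
-  // The user's did:pkh, saved to localStorage at login (assumed key name)
-  const subjectId = localStorage.getItem('did')
-
-  // 1. Ask the Express server to issue an EIP712 credential for this subject
-  const issued = await fetch('http://localhost:8080/create', {
-    method: 'POST',
-    headers: { 'Content-Type': 'application/json' },
-    body: JSON.stringify({ id: subjectId }),
-  }).then((res) => res.json())
-
-  // 2. Persist the credential via our Next.js API route, where the
-  // application DID (not the user) controls the resulting document
-  await fetch('/api/create', {
-    method: 'POST',
-    headers: { 'Content-Type': 'application/json' },
-    body: JSON.stringify(issued),
-  })
-}
-```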
-
-Go ahead and click `Generate Verifiable Credential` to see this in action.
-
-
-
-If you generated a few dummy credentials using Postman in the previous section, you'll notice that the output looks the same. You should also now see your own "did:pkh" appear within the `credentialSubject` key.
-
-### Creating JWT Credentials
-
-If you navigate to `localhost:3000/jwt` in your browser, you will be able to generate credentials using JWTs (and with yourself, the user, as the model instance controller in ComposeDB):
-
-
-
-
-
-
-
-### Verifying Credentials
-
-One item we haven't yet discussed is the process of verifying credentials. Since this process entails querying ComposeDB and reconstructing a credential instance for verification, we should also cover the changes we had to make between the original credential output from Veramo and the result saved in ComposeDB.
-
-You'll notice that both of our ComposeDB schema definitions save the entirety of the Veramo credential output, except that the "@context" key is renamed to "context" (given field naming constraints in GraphQL). We therefore rename this key to "context" when running our mutation query, and must restore it to "@context" when querying ComposeDB to reconstruct the credential.
-
-Given that you're already on the `localhost:3000/jwt` page, we'll jump into its corresponding component at `/client/src/components/VCJwt.tsx`. You'll notice how our `verifyCredential` method (tied to the `Verify JWT Credential` button) adds an `@context` key-value pair to our `final` object (and deletes the `context` pair) before hitting our Express server at `/verify-jws`.
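-
-That restoration step might look something like the following sketch (`final` mirrors the object name referenced above; the helper itself is illustrative):
-
-```jsx
-// Restore the GraphQL-safe "context" field to the W3C "@context" key
-// before sending the credential off for verification
-const restoreContext = (queried) => {
-  const final = { ...queried }
-  final['@context'] = final.context
-  delete final.context
-  return final
-}
-```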
-
-If you take a look at `/express-veramo/src/verify-credential-jws.ts` you'll see how we deconstruct our payload during the verification process (you can also see how this is done for our EIP712 verification route at `/express-veramo/src/verify-credential-712.ts`).
-
-Go ahead and click "Verify JWT Credential" if you still have `localhost:3000/jwt` open in your browser:
-
-
-
-
-
-
-
-Back in our `/client/src/components/VCJwt.tsx` component, you'll notice that we first make a fetch request to the `/api/query-jws` route (found at `/client/src/pages/api/query-jws`) to grab the most recent `VerifiableCredentialJWS` instance from our ComposeClient. Note that in a production setting with multiple users and many instance documents, your query might instead filter on the user's DID.
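-
-As a rough sketch, grabbing the most recent instance could look like the following (the `verifiableCredentialJWSIndex` entrypoint name is inferred from the model name, and the selected fields are abbreviated):
-
-```jsx
-// Fetch the most recently indexed VerifiableCredentialJWS document
-const result = await compose.executeQuery(`
-  query {
-    verifiableCredentialJWSIndex(last: 1) {
-      edges {
-        node {
-          id
-          issuer {
-            id
-          }
-          issuanceDate
-        }
-      }
-    }
-  }
-`)
-```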
-
-Finally, our `VCJwt` component hits the `/verify-jws` Express endpoint we just discussed to retrieve a response from our Veramo agent.
-
-### Querying Interfaces
-
-Back in your browser window you can access an integrated instance of GraphiQL by visiting `http://localhost:3000/query`:
-
-
-
-If you look at the query that's been generated for you, you'll see that we're using the general `VerifiableCredential` interface as the entrypoint, while still being able to grab corresponding proofs that are of EIP712 and JWT types:
-
-```graphql
-query VerifiableCredentialsAll {
- verifiableCredentialIndex(first: 10) {
- edges {
- node {
- controller {
- id
- }
- issuer {
- id
- }
- context
- type
- credentialSchema {
- id
- type
- }
- issuanceDate
- ... on VCEIP712Proof {
- proof {
- verificationMethod
- created
- proofPurpose
- type
- proofValue
- eip712 {
- domain {
- chainId
- name
- version
- }
- types {
- EIP712Domain {
- name
- type
- }
- CredentialSchema {
- name
- type
- }
- CredentialSubject {
- name
- type
- }
- Proof {
- name
- type
- }
- VerifiableCredential {
- name
- type
- }
- }
- primaryType
- }
- }
- }
- ... on VCJWTProof {
- proof {
- type
- jwt
- }
- }
- }
- }
- }
-}
-```
-
-Feel free to experiment further with GraphiQL: test out other interface entrypoints, or query by specific known StreamIDs or DIDs.
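-
-For example, a known StreamID can be loaded through the generated `node` entrypoint. A sketch using the ComposeDB client (the same query body works directly in GraphiQL; substitute a real StreamID from your index):
-
-```jsx
-// Load a single document by its StreamID via the Relay-style `node` field
-const byStream = await compose.executeQuery(`
-  query {
-    node(id: "your-stream-id-here") {
-      ... on VerifiableCredential {
-        issuer {
-          id
-        }
-        issuanceDate
-      }
-    }
-  }
-`)
-```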
-
-### Next Steps
-
-We hope you've enjoyed this example implementation of using ComposeDB to save and retrieve Verifiable Credentials. However, you may be wondering what else is possible in the realm of verifiable claims, or you may need to allow your users to encrypt their claims when saving to Ceramic.
-
-Here are a few resources you might find useful as you continue to discover what's possible with ComposeDB:
-
-- [**Creating Attestations with EAS**](https://docs.attest.sh/docs/tutorials/ceramic-storage) - Learn how to use the Ethereum Attestation Service to generate a different class of verifiable claims called "Attestations".
-- [**Lit Protocol with ComposeDB**](https://developer.litprotocol.com/v3/integrations/storage/ceramic-example) - Follow a tutorial that shows how to use Lit Protocol for access control together with ComposeDB.
-- [**Encrypted Data on ComposeDB**](https://blog.ceramic.network/tutorial-encrypted-data-on-composedb/) - Learn yet another way to encrypt and decrypt data on ComposeDB by generating an `encryptionDid` instance.
diff --git a/docs/composedb/getting-started.mdx b/docs/composedb/getting-started.mdx
deleted file mode 100644
index 4ace20a9..00000000
--- a/docs/composedb/getting-started.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
-# Getting Started
-
-## What is Ceramic?
-
-Ceramic is a shared data network for managing verifiable data at scale, combining **the trust and composability of a blockchain** with **the flexibility of an event-driven architecture** to help organizations get more value from their data.
-
-Ceramic provides developers with a shared data network that offers verifiable trust and interoperability, allowing them to leverage reusable data models, collective network effects and modular applications so they can focus on building and growing their unique vision.
-
-## What is ComposeDB?
-
-ComposeDB is a composable graph database built on Ceramic, designed for Web3 applications.
-
-ComposeDB on Ceramic stores and manages data while delivering fast queries and a catalog of plug-and-play data models. Developers, data scientists, and architects use it as the graph data layer for Web3.
-
-### Why ComposeDB?
-
-- Store and query data with powerful, easy-to-use GraphQL APIs
-- Build faster with a catalog of plug-and-play schemas
-- Bootstrap content by plugging into a composable data ecosystem
-- Deliver great UX with sign-in with Ethereum, Solana, and more
-- Eliminate trust and guarantee data verifiability
-- Scale your Web3 data infrastructure beyond L1 or L2 blockchains
-
-### What are the Common Use Cases?
-
-- **Decentralized identity** - user profiles, credentials, reputation systems
-- **Web3 social** - social graphs, posts, reactions, comments, messages
-- **DAO tools** - proposals, projects, tasks, votes, contribution graphs
-- **Open information graphs** - DeSci graphs, knowledge graphs, discourse graphs
-
-## How to Get Started
-
-- [**Create your first Ceramic app**](./create-ceramic-app) - Run `npx create-ceramic-app` to spin up your first local Ceramic app and start diving into code.
-- [**Set up your environment**](./set-up-your-environment) - Learn how to set up your development environment to start building with ComposeDB. Experience the real Ceramic network, either on the testnet or mainnet.
-- [**Create your composite**](./create-your-composite) - Learn how to create your first composite, a reusable data model that can be used across different applications.
-- [**Interact with data**](./interact-with-data) - Learn how to interact with data in ComposeDB, from creating, reading, updating, and deleting data to running complex queries.
-- [**Core ComposeDB concepts**](./core-concepts) - Learn about the core concepts of ComposeDB, such as composites, schemas, and queries.
-
-## Join Ceramic Community 💜
-
-Ceramic has a large and active community of developers. Here's how you can connect with us:
-
-- [**Forum**](https://forum.ceramic.network) - The best place to ask questions and to search for answers.
-- [**Discord**](https://chat.ceramic.network) - Join the conversation with other developers and the Ceramic team.
-- [**Twitter**](https://twitter.com/ceramicnetwork) - Follow us on Twitter for updates and announcements.
-- [**GitHub**](https://github.com/ceramicnetwork/) - Check out the Ceramic GitHub organization to find all the repositories and projects.
diff --git a/docs/composedb/guides/composedb-client/authenticate-users.mdx b/docs/composedb/guides/composedb-client/authenticate-users.mdx
deleted file mode 100644
index f02bfc17..00000000
--- a/docs/composedb/guides/composedb-client/authenticate-users.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
-# Authenticate Users
-Set up authentication for your ComposeDB application.
-
-## Introduction
-In ComposeDB, authentication is needed to [enable mutations](../../guides/data-interactions/mutations.mdx) on data controlled by a user’s account.
-
-## Get Started
-
-### Enable sessions
-
-Enable users to create an authenticated session using their blockchain wallet.
-
-[**User Sessions →**](./user-sessions.mdx)
\ No newline at end of file
diff --git a/docs/composedb/guides/composedb-client/composedb-client.mdx b/docs/composedb/guides/composedb-client/composedb-client.mdx
deleted file mode 100644
index 78158722..00000000
--- a/docs/composedb/guides/composedb-client/composedb-client.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
-# ComposeDB Client
-
-Connect your app to a ComposeDB server
-
-## Connect your application
-
-Interact with ComposeDB using JavaScript, TypeScript, or React
-
-**[JavaScript Client →](./javascript-client.mdx)**
-
-## Authenticate users
-
-Enable user interactions, including data mutations
-
-**[Authenticate Users →](./authenticate-users.mdx)**
diff --git a/docs/composedb/guides/composedb-client/javascript-client.mdx b/docs/composedb/guides/composedb-client/javascript-client.mdx
deleted file mode 100644
index c3b84794..00000000
--- a/docs/composedb/guides/composedb-client/javascript-client.mdx
+++ /dev/null
@@ -1,174 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-import DocCardList from '@theme/DocCardList';
-
-# JavaScript Client
-APIs to interact with ComposeDB from JavaScript, TypeScript, or React.
-
-## Prerequisites
----
-- A [compiled composite](../../guides/data-modeling/composites.mdx#compiling-composites)
-
-## Installation
----
-Install the ComposeDB client package:
-
-
-
-
-```bash
-npm install @composedb/client
-```
-
-
-
-
-```bash
-pnpm add @composedb/client
-```
-
-
-
-
-```bash
-yarn add @composedb/client
-```
-
-
-
-
-If you’re using TypeScript, you may also need to install ComposeDB Types:
-
-
-
-
-```bash
-npm install -D @composedb/types
-```
-
-
-
-
-```bash
-pnpm add -D @composedb/types
-```
-
-
-
-
-```bash
-yarn add -D @composedb/types
-```
-
-
-
-
-
-## Configuration
----
-Create a client instance by passing your server URL and your compiled composite:
-
-```jsx
-// Import the ComposeDB client
-import { ComposeClient } from '@composedb/client'
-
-// Import your compiled composite
-import { definition } from './__generated__/definition.js'
-
-// Create an instance of ComposeClient, passing the URL of your
-// Ceramic server and a reference to your compiled composite
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-```
-
-More details: [`ComposeClient`](https://composedb.js.org/docs/0.5.x/api/classes/client.ComposeClient)
-
-## Queries
----
-### Executing Queries
-
-Execute GraphQL [Queries](../../guides/data-interactions/queries.mdx) using the schema that is auto-generated from your compiled composite:
-
-```jsx
-// Get the account of the authenticated user
-await compose.executeQuery(`
-  query {
-    viewer {
-      id
-    }
-  }
-`)
-```
-
-More details: [`executeQuery`](https://composedb.js.org/docs/0.5.x/api/classes/client.ComposeClient#executequery)
-
-## Mutations
----
-### Enabling Mutations
-
-:::tip
-Before enabling mutations you must [authenticate the user](./../composedb-client/authenticate-users.mdx).
-:::
-
-After you have an authenticated user, enable [mutations](../../guides/data-interactions/mutations.mdx) by setting their authenticated account on the ComposeDB client:
-
-
-
-
-```jsx
-// Assign the authorized did from your session to your client
-
-compose.setDID(session.did)
-```
-
-
-
-
-```jsx
-// Call setDID method on ComposeClient instance
-// Using authenticated did instance
-
-compose.setDID(did)
-```
-
-
-
-
-### Executing Mutations
-In your client, you can execute GraphQL mutations using the schema that is auto-generated from your compiled composite. Follow the examples in the [mutations](../../guides/data-interactions/mutations.mdx) guide, or see the sketch below.
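-
-A hypothetical example, assuming your composite includes a `Post` model with a `body` field (ComposeDB would generate a `createPost` mutation for such a model):
-
-```jsx
-// Create a new Post document as the authenticated user
-await compose.executeQuery(`
-  mutation {
-    createPost(input: { content: { body: "Hello from ComposeDB" } }) {
-      document {
-        id
-        body
-      }
-    }
-  }
-`)
-```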
-
-## Next Steps
----
-Learn how to [**Authenticate Users →**](./../composedb-client/authenticate-users.mdx)
-
-## Related Guides
----
-ComposeDB’s JavaScript client optionally works with popular GraphQL clients:
-
-
\ No newline at end of file
diff --git a/docs/composedb/guides/composedb-client/user-sessions.mdx b/docs/composedb/guides/composedb-client/user-sessions.mdx
deleted file mode 100644
index 4b698539..00000000
--- a/docs/composedb/guides/composedb-client/user-sessions.mdx
+++ /dev/null
@@ -1,230 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-
-# User Sessions
-
-Create authenticated sessions for users with great UX.
-
-## About Sessions
----
-
-- Sessions provide a familiar, web2-like authentication experience for Ceramic apps where a user signs in once for a timebound session and then interacts without needing to manually approve every transaction
-- A durable root Ceramic account (did:pkh) is generated based on the user’s blockchain wallet
-- The root account generates a temporary Ceramic account (did:key) for each app with tightly-scoped permissions that only lives for a period of time in the user’s browser
-
-## Installation
----
-
-First, install the did-session library:
-
-
-
-
-```bash
-npm install did-session
-```
-
-
-
-
-```bash
-pnpm add did-session
-```
-
-
-
-
-```bash
-yarn add did-session
-```
-
-
-
-
-Then, install the appropriate blockchain wallet module:
-
-
-
-
-```bash
-# For Ethereum accounts
-npm install @didtools/pkh-ethereum
-
-# For Solana accounts
-npm install @didtools/pkh-solana
-```
-
-
-
-
-```bash
-# For Ethereum accounts
-pnpm add @didtools/pkh-ethereum
-
-# For Solana accounts
-pnpm add @didtools/pkh-solana
-```
-
-
-
-
-```bash
-# For Ethereum accounts
-yarn add @didtools/pkh-ethereum
-
-# For Solana accounts
-yarn add @didtools/pkh-solana
-```
-
-
-
-
-## Authorization
----
-
-### Ethereum Wallets
-
-Authorize with an Ethereum account using [@didtools/pkh-ethereum](https://did.js.org/docs/api/modules/pkh_ethereum):
-
-```jsx
-import { DIDSession } from 'did-session'
-import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'
-import { ComposeClient } from '@composedb/client'
-// Import your compiled composite
-import { definition } from './__generated__/definition.js'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.request({ method: 'eth_requestAccounts' })
-const accountId = await getAccountId(ethProvider, addresses[0])
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
-
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-
-const session = await DIDSession.get(accountId, authMethod, { resources: compose.resources })
-compose.setDID(session.did)
-```
-
-### Solana Wallets
-
-Authorize with a Solana account using [@didtools/pkh-solana](https://did.js.org/docs/api/modules/pkh_solana):
-
-```jsx
-import { DIDSession } from 'did-session'
-import { SolanaWebAuth, getAccountIdByNetwork } from '@didtools/pkh-solana'
-import { ComposeClient } from '@composedb/client'
-// Import your compiled composite
-import { definition } from './__generated__/definition.js'
-
-const solProvider = // import/get your Solana provider (ie: window.phantom.solana)
-const address = await solProvider.connect()
-const accountId = getAccountIdByNetwork('mainnet', address.publicKey.toString())
-const authMethod = await SolanaWebAuth.getAuthMethod(solProvider, accountId)
-
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-
-const session = await DIDSession.get(accountId, authMethod, { resources: compose.resources })
-compose.setDID(session.did)
-```
-
-:::tip
-Additional chain support is continually being added. You can find the link to each chain and its docs below.
-
-- [Tezos](https://did.js.org/docs/api/modules/pkh_tezos)
-:::
-
-## Scopes
----
-
-In the examples above, a `resources` array is passed to `DIDSession.get`. This is effectively the scope of permissions that the user is granting. In ComposeDB, these resources are the models you’ve included in your composite.
-
-The compose client offers a simple getter `compose.resources` that formats all model streamIDs in your composite for did-session. You can then pass this as a configuration option.
-
-`compose.resources` is an array of URI-formatted model streamIDs, for example:
-
-```jsx
-[
- 'ceramic://*?model=kjzl6hvfrbw6c5ajfmes842lu09vjxu5956e3xq0xk12gp2jcf9s90cagt2god9',
- 'ceramic://*?model=kjzl6hvfrbw6c99mdfpjx1z3fue7sesgua6gsl1vu97229lq56344zu9bawnf96',
-]
-```
-
-### Session Lifecycle
-
-Additional helper functions are available to help you manage a session lifecycle and the user experience.
-
-```jsx
-// Check if authorized or created from existing session string
-didsession.hasSession
-
-// Check if session expired
-didsession.isExpired
-
-// Get resources session is authorized for
-didsession.authorizations
-
-// Check the number of seconds until expiration; you may want to re-authenticate the user shortly before expiration
-didsession.expiresInSecs
-```
-
-### Complete Example
-
-A typical usage pattern is to store sessions in localStorage: when a user loads your app, you can restore an existing session or create a new one. Before making mutations with the client instance, make sure that the session has not expired.
-
-:::caution
-
-localStorage is used for illustrative purposes here and may not be best for your app, as there are a number of known issues with storing secret material in browser storage. The session string allows anyone with access to it to make writes for that user for the time and resources the session is valid for. How the session string is stored and managed is the responsibility of the application.
-
-:::
-
-```jsx
-import { DIDSession } from 'did-session'
-import type { AuthMethod } from '@didtools/cacao'
-import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'
-import { ComposeClient } from '@composedb/client'
-// Import your compiled composite
-import { definition } from './__generated__/definition.js'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.request({ method: 'eth_requestAccounts' })
-const accountId = await getAccountId(ethProvider, addresses[0])
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
-
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-
-let session = await DIDSession.get(accountId, authMethod, { resources: compose.resources })
-compose.setDID(session.did)
-
-// pass the ceramic instance where needed, e.g. glaze
-// ...
-
-// Before mutations, check that the session is still valid; if expired, create a new one
-if (session.isExpired) {
-  session = await DIDSession.get(accountId, authMethod, { resources: compose.resources })
-  compose.setDID(session.did)
-}
-
-// perform mutations, continue to use the compose client
-```
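-
-To actually persist a session across page loads as described above, the session can be serialized to a string and restored later. A rough sketch, assuming did-session's `serialize()` and `DIDSession.fromSession()` APIs and a storage key name of your choosing:
-
-```jsx
-// After authorization, store the serialized session string
-localStorage.setItem('ceramic-session', session.serialize())
-
-// On a later page load, try to restore it before prompting the wallet again
-const stored = localStorage.getItem('ceramic-session')
-if (stored) {
-  const restored = await DIDSession.fromSession(stored)
-  if (restored.hasSession && !restored.isExpired) {
-    compose.setDID(restored.did)
-  }
-}
-```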
-
-## Modal
----
-
-When authenticating, the user is prompted with a human-readable wallet modal that explains what they’re giving permissions for. To accomplish this, the DID Session library supports Sign-In With X, a chain-agnostic authorization standard. Sign-In With Ethereum is shown below:
-
-
-
-## Next Steps
----
-
-With authenticated users, now you can move to:
-
-1. Setting up [**Data Interactions**](../../guides/data-interactions/data-interactions.mdx)
-2. Being able to use [**mutations**](../../guides/data-interactions/mutations.mdx) with ComposeDB client
diff --git a/docs/composedb/guides/composedb-client/using-apollo.mdx b/docs/composedb/guides/composedb-client/using-apollo.mdx
deleted file mode 100644
index d4e1eb62..00000000
--- a/docs/composedb/guides/composedb-client/using-apollo.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
-# Using Apollo GraphQL Client
-[Apollo](https://www.apollographql.com/docs/react/api/core/ApolloClient) is a popular GraphQL client for React and other platforms.
-
-## Prerequisites
-- Install the [`composedb`](../../set-up-your-environment.mdx) packages
-- Install `@apollo/client`
-- A compiled composite
-
-## Usage
-ComposeDB client can be used with the [Apollo client](https://www.apollographql.com/docs/react/api/core/ApolloClient) by creating a custom [Apollo link](https://www.apollographql.com/docs/react/api/link/introduction), as shown in the example below:
-
-```jsx
-import { ApolloClient, ApolloLink, InMemoryCache, Observable } from '@apollo/client'
-import { ComposeClient } from '@composedb/client'
-
-// Path to compiled composite
-import { definition } from './__generated__/definition.js'
-
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-
-// Create custom ApolloLink using ComposeClient instance to execute operations
-const link = new ApolloLink((operation) => {
- return new Observable((observer) => {
- compose.execute(operation.query, operation.variables).then(
- (result) => {
- observer.next(result)
- observer.complete()
- },
- (error) => {
- observer.error(error)
- }
- )
- })
-})
-
-// Use ApolloLink instance in ApolloClient config
-export const client = new ApolloClient({ cache: new InMemoryCache(), link })
-```
diff --git a/docs/composedb/guides/composedb-client/using-relay.mdx b/docs/composedb/guides/composedb-client/using-relay.mdx
deleted file mode 100644
index fb5eb08b..00000000
--- a/docs/composedb/guides/composedb-client/using-relay.mdx
+++ /dev/null
@@ -1,28 +0,0 @@
-# Using Relay GraphQL Client
-[Relay](https://relay.dev/) is a popular GraphQL client for React.
-
-## Prerequisites
-- Install the [`composedb`](../../set-up-your-environment.mdx) packages
-- Install the `relay-runtime` package
-- A compiled composite
-
-## Usage
-The ComposeDB client can be used with Relay by creating a custom [network layer](https://relay.dev/docs/guides/network-layer/), as shown:
-
-```jsx
-import { ComposeClient } from '@composedb/client'
-import { Environment, Network, RecordSource, Store } from 'relay-runtime'
-
-// Path to compiled composite
-import { definition } from './__generated__/definition.js'
-
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-
-// Create custom Network using ComposeClient instance to execute operations
-const network = Network.create(async (request, variables) => {
-  return await compose.executeQuery(request.text, variables)
-})
-
-// Use created Network instance to create Relay Environment
-export const environment = new Environment({ network, store: new Store(new RecordSource()) })
-```
\ No newline at end of file
diff --git a/docs/composedb/guides/composedb-server/access-mainnet.mdx b/docs/composedb/guides/composedb-server/access-mainnet.mdx
deleted file mode 100644
index 5268970b..00000000
--- a/docs/composedb/guides/composedb-server/access-mainnet.mdx
+++ /dev/null
@@ -1,154 +0,0 @@
-# Access Ceramic Mainnet
-
-To join mainnet, you must register your Ceramic node with 3Box Labs’ Ceramic Anchor Service (CAS). The job of CAS is to anchor Ceramic streams on the Ethereum blockchain, so your node will not work without access to CAS.
-
-To register you will need (1) a valid email address and (2) the DID used by your Ceramic daemon.
-
-:::caution
-
-IMPORTANT: The daemon config file should be considered secret and should not be shared because it will store a private seed used to authenticate your node to the 3Box Labs Ceramic Anchor Service.
-
-If you need to share your node’s configuration, you can safely copy and paste it from your daemon startup logs, which always exclude the private seed URL.
-
-:::
-
-## Step 1. Start your node and copy your Key DID
-
-If this is your first time starting a Ceramic node, a random private seed URL will be generated for you. The seed in this URL is used to create a [Key DID](../../../protocol/js-ceramic/accounts/decentralized-identifiers#key-did) for your Ceramic node. When you start the daemon, it will display the Key DID in the console logs, like the one below.
-
-```bash
-IMPORTANT: DID set to 'did:key:z6MkppuNZjR4QR8rxrPv4ejbGqgUcwwmxse47efsB3C1XnaM'
-```
-
-Copy the quoted DID so you can use it later.
-
-## Step 2. Verify your email address
-
-A valid email address is required so that you have a way to control the Ceramic nodes that are given access to the 3Box Labs anchor service. Using this email you will be able to register or revoke DIDs for your nodes.
-
-```bash
-> curl --request POST \
- --url https://cas.3boxlabs.com/api/v0/auth/verification \
- --header 'Content-Type: application/json' \
- --data '{"email": "youremailaddress"}'
-```
-
-You should see a response that says `"Please check your email for your verification code."`
-
-Now check your email and copy the one time passcode enclosed within. It will be a string of letters and numbers _similar_ to this: **`2451cc10-5a39-494d-b8eb-1971ecd813de`.**
-
-## Step 3. Register your DID
-
-Use your DID, the one time passcode, and the same email address, to register your DID.
-
-```bash
-> curl --request POST \
- --url https://cas.3boxlabs.com/api/v0/auth/did \
- --header 'Content-Type: application/json' \
- --data '{
- "email": "youremailaddress",
- "otp": "youronetimepasscode",
- "dids": [
- "yourdid"
- ]
- }'
-```
-
-You should see a response that says `[{"email":"youremailaddress","did":"yourdid","nonce":"0","status":"Active"}]`.
-
-Finally, start your Ceramic node again. You will know that the DID registration was successful if you see logs in the console like the ones below.
-
-```bash
-IMPORTANT: Connected to anchor service 'https://cas.3boxlabs.com' with supported anchor chains ['eip155:1']
-IMPORTANT: Ceramic API running on 0.0.0.0:7007'
-```
-
-## Existing Node Operators
-
-Already running a node? Learn how to upgrade.
-
-If you are already running a Ceramic daemon connected to mainnet, you have been using IP-address-based authentication to connect to the 3Box Labs mainnet CAS. You are not required to re-register or make any changes to your daemon; however, please note that we will be deprecating IP-address-based authentication in the future. To prepare for this deprecation, we recommend updating your daemon config file to use DID-based authentication, then registering your DID with the steps above.
-
-If you have run a Ceramic daemon before but have not yet connected to mainnet, you must update your daemon config file to use DID-based authentication, then register your DID with the steps above.
-
-### Updating to DID-based authentication
-
-First you will need to generate a private seed. You can do this with the ComposeDB CLI.
-
-```bash
-> composedb did:generate-private-key
-✔ Generating random private key... Done!
-99918d7f36991ec38d76e1cf21d14c5348d1513512c957d0b809efbf3ad18983
-```
-
-Copy the string of numbers and letters logged to the console. You will use it BOTH in the `daemon.config.json` below AND to generate a DID as follows:
-
-```bash
-> composedb did:from-private-key private_seed_copied_from_above
-✔ Creating DID... Done!
-
-Admin DID: did:key:z6Mkq1r4LAsQTjCN7EBTnGf7DorL28aZ4eb6akcLwJSwygBt
-
-# Use this DID in the steps above!
-```
-
-:::note
-
-As a security best practice, do not use any private key or seed more than once.
-
-:::
-
-Update your `daemon.config.json` file to set your anchor auth method and node private seed URL.
-
-```json
- "anchor": {
- "auth-method": "did"
- },
-```
-
-```json
- "node": {
- "privateSeedUrl": "inplace:ed25519#private_seed_copied_from_above"
- },
-```
-
-Save the file and follow the steps above to register your DID.
-
-## Rate Limits
-
-By default, requests to CAS are capped at 200 requests per second, 130 concurrent requests, and 10,000,000 requests per week.
-
-For larger apps, we can increase the cap to 600 requests per second, 400 concurrent requests, and 300,000,000 requests per week.
-
-Interested in larger caps for your app? Reach out to [partners@3box.io](mailto:partners@3box.io) to discuss directly with our team.
-
-As we improve scalability, expect rate limiting to be removed.
-
-## Revoking a DID
-If your private seed has been compromised or lost, you should revoke your DID and generate a new one so that your daemon cannot be impersonated. Each Ceramic daemon needs a unique DID in order for streams to be anchored correctly, so it is important that the private seed used to generate the DID is only used in one place.
-
-To revoke your DID you will need the email address you used to register the DID.
-
-### Step 1. Verify your email address
-
-```bash
-> curl --request POST \
- --url https://cas.3boxlabs.com/api/v0/auth/verification \
- --header 'Content-Type: application/json' \
- --data '{"email": "youremailaddress"}'
-```
-
-Now check your email and copy the one time passcode enclosed within. It will be a string of letters and numbers similar to this: **`2451cc10-5a39-494d-b8eb-1971ecd813de`.**
-
-### Step 2. Send a revocation request
-
-Make a PATCH request to the endpoint below with your DID added to the end. The full URL should look similar to `https://cas.3boxlabs.com/api/v0/auth/did/did:key:z6MkmrAdXvCBGzQVbHLNYq6y9gfFgmnYFqvmwktp3wyQFAok`.
-
-```bash
-> curl --request PATCH \
- --url https://cas.3boxlabs.com/api/v0/auth/did/yourdid \
- --header 'Content-Type: application/json' \
- --data '{"email": "youremailaddress", "otp": "youronetimepasscode"}'
-```
-
-You should see a response that says `{"email": "youremailaddress", "did": "yourdid", "status": "Revoked"}`.
diff --git a/docs/composedb/guides/composedb-server/composedb-server.mdx b/docs/composedb/guides/composedb-server/composedb-server.mdx
deleted file mode 100644
index d325be97..00000000
--- a/docs/composedb/guides/composedb-server/composedb-server.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
-# ComposeDB Server
-Set up and run a ComposeDB Server
-
-## Running locally
-To get started quickly, run a ComposeDB server locally on your machine.
-
-[**Running Locally →**](../../guides/composedb-server/running-locally.mdx)
-
-## Running in the cloud
-To support production usage, run a high-availability ComposeDB server in the cloud.
-
-[**Running in the Cloud →**](../../guides/composedb-server/running-in-the-cloud.mdx)
-
-## Usage Guides
-Dive into our server usage guides for more information on:
-- [Server Configurations](../../guides/composedb-server/server-configurations.mdx)
-- [Access Mainnet](../../guides/composedb-server/access-mainnet.mdx)
-- [Data Storage](../../guides/composedb-server/data-storage.mdx)
\ No newline at end of file
diff --git a/docs/composedb/guides/composedb-server/data-storage.mdx b/docs/composedb/guides/composedb-server/data-storage.mdx
deleted file mode 100644
index 8eab4d2e..00000000
--- a/docs/composedb/guides/composedb-server/data-storage.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
-# Data Storage
-Store and remove data from your node
-
-## Overview
-In production, node operators can choose what content to store on their node by pinning and unpinning models or streams. Unpinning is not synonymous with deletion.
-
-## Storage & Removal
-To prevent the loss of streams due to garbage collection, you need to explicitly pin the streams you wish to persist. Pinning instructs the node to keep those streams in persistent storage until they are explicitly unpinned. For the pinning and unpinning commands, see the [Ceramic docs](../../../protocol/js-ceramic/guides/ceramic-clients/javascript-clients/pinning); a rough sketch follows below.
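-
-As an illustration only (the pinning API surface has moved between `ceramic.pin` and `ceramic.admin.pin` across js-ceramic versions, so treat this as a sketch and defer to the linked docs):
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { StreamID } from '@ceramicnetwork/streamid'
-
-// The client DID may need to be set to your node's admin DID for admin APIs
-const ceramic = new CeramicClient('http://localhost:7007')
-
-// Pin a stream so it survives garbage collection
-const streamId = StreamID.fromString('kjzl6hvfrbw6c5ajfmes842lu09vjxu5956e3xq0xk12gp2jcf9s90cagt2god9')
-await ceramic.admin.pin.add(streamId)
-```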
-
-## Deletion
-When using or participating in the Ceramic network, be aware that data streams support a slightly different set of operations than the standard CRUD operations you may be used to in other tech stacks.
-
-All data streams are globally readable. If you know the streamID of any stream on the network, you, and any other app in the world, can access its data. This is the backbone of composability: without this globally readable trait, data created on Ceramic would not be portable from app to app.
-
-There is no “delete” operation on a Ceramic data stream. The blockchain is by nature a public ledger, so once a Ceramic stream is anchored on-chain, it will exist there forever. Although the data may become stale over time, it is preserved in the state in which it was last anchored. Since we cannot mutate the blockchain, we can never perform a full deletion of a Ceramic data stream. Take this into consideration when deciding what types of data to store on the Ceramic network.
\ No newline at end of file
diff --git a/docs/composedb/guides/composedb-server/running-in-the-cloud.mdx b/docs/composedb/guides/composedb-server/running-in-the-cloud.mdx
deleted file mode 100644
index d253f00b..00000000
--- a/docs/composedb/guides/composedb-server/running-in-the-cloud.mdx
+++ /dev/null
@@ -1,350 +0,0 @@
-# Running in the Cloud
-Run a ComposeDB server in the cloud
-
-## Things to Know
-- This guide is focused on running in the cloud using Docker and Kubernetes. For local deployment instructions check out [Running Locally](../../guides/composedb-server/running-locally.mdx).
-- Interacting with ComposeDB requires running a Ceramic node as an interface for Ceramic applications, a `ceramic-one` binary for data network access, and a Postgres database. Each of these components should run in a separate Docker container.
-- Docker images to run a Ceramic server are built from the [js-ceramic](https://github.com/ceramicnetwork/js-ceramic) repository. Images built from the `main` branch are tagged with `latest`, the git commit hash of the code from which the image was built, and the npm package version of the corresponding [`@ceramicnetwork/cli`](https://www.npmjs.com/package/@ceramicnetwork/cli) release.
-
-
-## Cloud Requirements
-**Supported Operating Systems**
-
-- Linux
-
-**Compute requirements**
-
-You’ll need sufficient compute resources to power Ceramic, `ceramic-one`, and Postgres. Below are the recommended requirements:
-
-- 4 vCPUs
-- 8GB RAM
-
-:::note
-
-If you are just getting started with a brand new project, you can start with a smaller instance and scale afterwards.
-
-:::
-
-## Running Ceramic server on Kubernetes
-
-You can run Ceramic Server on Kubernetes on the cloud, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine) or [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks/).
-You can also run Ceramic Server on [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/).
-
-Running Kubernetes in the cloud means a provider manages the underlying infrastructure for you. You can also run Kubernetes on your own infrastructure, but that is outside the scope of this guide.
-
-### Running Ceramic server on DigitalOcean Kubernetes
-
-DigitalOcean Kubernetes (DOKS) lets developers deploy Kubernetes clusters as a simple managed service.
-Deploying Ceramic on DigitalOcean Kubernetes requires two tools:
-
-- [kubectl](https://kubernetes.io/docs/tasks/tools) - the Kubernetes command line tool
-- [doctl](https://docs.digitalocean.com/reference/doctl/how-to/install/) - the DigitalOcean command line tool
-
-Make sure you have these tools installed on your machine before proceeding to the next step of this guide.
-
-### Creating a Kubernetes Cluster
-First, you will have to create your DigitalOcean Kubernetes cluster. To do that, follow the [official DigitalOcean tutorial](https://docs.digitalocean.com/products/kubernetes/how-to/create-clusters/). Setting up your Kubernetes cluster takes about 10 minutes.
-Once it’s up and running, you are ready to continue with the next step.
-
-:::note
-
-When choosing your cluster capacity, we recommend the most cost-effective option: start with the smallest cluster size and upgrade later. Also, keep in mind that
-DigitalOcean offers free credits for new users to start building their projects.
-
-:::
-
-### Connecting to Kubernetes cluster
-
-First, you will have to configure authentication for your cluster and retrieve its credentials. This can be done with the doctl command below, substituting the cluster ID provided
-to you by DigitalOcean right after your cluster is launched:
-
-```bash
-doctl kubernetes cluster kubeconfig save 362dda8b-b555-4c47-9bf0-1a81cf58e0a8
-```
-
-After authenticating your cluster, it’s a good idea to verify connectivity. The following command should list your cluster name, user, and namespace:
-
-```bash
-kubectl config get-contexts
-```
-
-### Deploy a Ceramic node
-In this section we will focus on deploying the Ceramic node on the DigitalOcean Kubernetes cluster.
-
-1. Clone the [simpledeploy](https://github.com/ceramicstudio/simpledeploy.git) repository and enter the created directory:
-
-```bash
-git clone https://github.com/ceramicstudio/simpledeploy.git
-cd simpledeploy/k8s/base/ceramic-one
-```
-
-2. Run the following commands to deploy the stack:
-```bash
-# Create a namespace for the deployment
-export CERAMIC_NAMESPACE=ceramic-one-0-17-0
-kubectl create namespace ${CERAMIC_NAMESPACE}
-
-# Create the necessary secrets
-./scripts/create-secrets.sh
-
-# Apply the deployment
-kubectl apply -k .
-```
-
-3. It will take a few minutes for the deployment to pull the Docker images and start the containers. You can watch the progress with the following command:
-
-```bash
-kubectl get pods --watch --namespace ceramic-one-0-17-0
-```
-
-You will know that your deployment is up and running when all of the processes have a status `Running` as follows:
-
-```bash
-NAME READY STATUS RESTARTS AGE
-ceramic-one-0 1/1 Running 0 77s
-ceramic-one-1 1/1 Running 0 77s
-js-ceramic-0 1/1 Running 0 77s
-js-ceramic-1 1/1 Running 0 77s
-postgres-0 1/1 Running 0 77s
-```
-
-Hit `^C` on your keyboard to exit this view.
-
-:::note
-
-You can easily access the logs of each of the containers by using the command below and configuring the container name. For example, to access the Ceramic node logs, you can run:
-
-`kubectl logs --follow --namespace ceramic-one-0-17-0 js-ceramic-0`
-
-:::
-
-
-### Access the Ceramic node using the API
-
-You can use local port forwarding to access the Ceramic node from your local machine. Open a new terminal and run the command below. The port forward stops when the command exits,
-so make sure to keep this command running for the rest of this guide.
-
-```bash
-kubectl port-forward --namespace ceramic-one-0-17-0 js-ceramic-0 7007:7007
-```
-Once you run the command you should see the following output in your terminal:
-
-```bash
-Forwarding from 127.0.0.1:7007 -> 7007
-Forwarding from [::1]:7007 -> 7007
-```
-
-:::note
-
-The Ceramic node must be ready to accept connections before you can access it.
-The pod's state must be `Running` and the `READY` column must be `1/1`.
-You can check the status of the node by running the command below:
-
-```bash
-$ kubectl get pods --namespace ceramic-one-0-17-0 js-ceramic-0
-NAME           READY   STATUS    RESTARTS      AGE
-js-ceramic-0   1/1     Running   1 (28h ago)   28h
-```
-
-:::
-
-To check the connection, open a new terminal and run the command below. A successful connection should return the response `Alive!`:
-
-```bash
-curl http://127.0.0.1:7007/api/v0/node/healthcheck
-```
-
-Expected output:
-```bash
-Alive!
-```
-
-### Expose the node endpoint to the internet
-
-The last step is to expose your Ceramic node to the internet so that it’s accessible to your application. This can be done using a DigitalOcean Load Balancer, which comes pre-configured when using the SimpleDeploy scripts.
-You can get the external IP of your `js-ceramic` node (as well as `ceramic-one`) using the following command:
-
-```bash
-kubectl get svc --namespace ceramic-one-0-17-0 js-ceramic-lb-0
-```
-
-The result of this command will be an output similar to the one below. Keep in mind that it might take a few minutes for the EXTERNAL-IP to be provisioned; until then, its column shows a pending placeholder:
-
-```bash
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-js-ceramic-lb-1 LoadBalancer 10.245.10.130 174.138.109.159 7007:31284/TCP 4m4s
-```
-
-This external IP address can now be used for accessing your node. To test it out, copy the external IP address provided above and substitute it in the following health check command:
-
-```bash
-curl http://174.138.109.159:7007/api/v0/node/healthcheck
-```
-
-Once again, a successful connection will provide an output `Alive!`:
-```bash
-Alive!
-```
-
-### Optional - Add an SSL Cert and Domain Name
-If you wish to direct a domain to your Ceramic node and acquire an SSL certificate, you may follow the steps under [cert-ingress](https://github.com/ceramicstudio/simpledeploy/blob/main/k8s/cert-ingress/README.md) to modify the Kubernetes setup. Of course, you may use other methods to add a domain name and certificate depending on which provider you wish to use.
-
-### Utilize the Deployed Assets with the ComposeDB CLI and GraphiQL Server
-Now that you have a Ceramic server deployed, you can use the [ComposeDB CLI](../../set-up-your-environment.mdx#composedb) to create models and
-composites, as well as stand up a GraphiQL server backed by your Ceramic with ComposeDB server.
-
-First you will need to install the [ComposeDB CLI](../../set-up-your-environment.mdx#composedb). Next you will need to set up
-your environment to properly talk to your server:
-
-```bash
-export CERAMIC_URL="http://"$(kubectl get service js-ceramic-lb-0 --namespace ceramic-one-0-17-0 -o json | jq -r '.status.loadBalancer.ingress[0].ip')":7007"
-export DID_PRIVATE_KEY=$(kubectl get secrets --namespace ceramic-one-0-17-0 ceramic-admin -o json | jq -r '.data."private-key"' | base64 -d)
-```
-
-You can now follow the existing guides, adding `--ceramic-url` or `--did-private-key` to your composedb calls. For
-example:
-
-```bash
-composedb composite:from-model kjzl6hvfrbw6c5ajfmes842lu09vjxu5956e3xq0xk12gp2jcf9s90cagt2god9 --output=my-first-composite-single.json --ceramic-url=$CERAMIC_URL --did-private-key=$DID_PRIVATE_KEY
-```
-
-will create a new composite using your remote Ceramic server. You can also run GraphiQL locally:
-
-```bash
-composedb graphql:server --graphiql runtime-composite.json --port=5005 --did-private-key=$DID_PRIVATE_KEY
-```
-
-You can access the GraphiQL server at [http://localhost:5005/graphql](http://localhost:5005/graphql).
-
-## Commonly asked questions
-
-### Where is my data stored?
-
-Each part of the stack (js-ceramic, postgres) has its own [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
-You can view the volumes with the following command:
-
-```bash
-kubectl get PersistentVolumeClaim --namespace ceramic-one-0-17-0
-```
-
-This output includes identifiers for the volume on the cloud provider as well as the size and storage class, which defines the properties of the volume.
-
-### What is my admin DID and how do I use it to connect?
-The Ceramic node is configured with an admin DID, which is used to authenticate with the node. The DID is derived from a seed stored in a Kubernetes secret named `ceramic-admin`, where the `private-key` key's value is the base64-encoded seed.
-
-While the example deployment creates a random seed for the admin DID, you can use your own seed by creating a secret with the same name and key instead of using the `create-secrets.sh` script.
-
-Example:
-```bash
-$ kubectl create secret generic ceramic-admin --from-literal=private-key=your_base64_encoded_seed
-```
-To view the currently configured admin DID seed, you can use the following command (requires jq):
-```bash
-kubectl get secrets --namespace ceramic-one-0-17-0 ceramic-admin -o json | jq -r '.data."private-key"' | base64 -d
-```
-
-### How do I connect to the Postgres database?
-
-You can create a session to the postgres database with the following command:
-
-```bash
-kubectl exec --namespace ceramic-one-0-17-0 -ti postgres-0 -- psql -U ceramic
-```
-
-A `postgres` service is also created and can be exposed locally with port-forwarding:
-
-```bash
-kubectl port-forward --namespace ceramic-one-0-17-0 svc/postgres 5432
-```
-
-The `ceramic` user password is randomly generated during deployment.
-It is also available in the `postgres-auth` secret:
-
-```bash
-kubectl --namespace ceramic-one-0-17-0 get secrets postgres-auth -o yaml
-```
-
-Here you should get the following output:
-
-```bash
-apiVersion: v1
-data:
- password: NzNjNzQ4ZDkxM2Y5NGQ2MmQwOTRiYzQ2YzIzMmM4YzdlYzFhODA2MA==
- username: Y2VyYW1pYw==
-kind: Secret
-...
-```
-
-### How do I shut it all down?
-
-To remove the workload from the cluster, you can delete the namespace. For example:
-
-```bash
-kubectl delete namespace ceramic-one-0-17-0
-```
-
-
-## Docker Hub
-You can find the ComposeDB server and IPFS Docker images on [Docker Hub](https://hub.docker.com/u/ceramicnetwork).
-Below are examples of how to run Postgres and Ceramic processes using Docker.
-
-
-### Running Postgres
-An example below demonstrates how you can run a Postgres process. Make sure to update the variables to fit your use case:
-
-```bash
-docker pull postgres
-
-docker run -d \
- -e POSTGRES_PASSWORD=mysecretpassword \
- -e PGDATA=/var/lib/postgresql/data/pgdata \
- -v /path_on_volume_for_postgres_data:/var/lib/postgresql/data \
- -p 5432:5432 \
- --name postgres \
- postgres
-```
-
-You can also follow the examples from the official Postgres Docker image [documentation](https://hub.docker.com/_/postgres).
-
-### Running Ceramic
-An example below demonstrates how you can run a Ceramic server using Docker. Make sure to update the variables to fit your use case:
-
-```bash
-docker pull ceramicnetwork/js-ceramic:latest
-
-docker run -d \
- -p 7007:7007 \
- -v /path_on_volume_for_daemon_config:/root/.ceramic/daemon.config.json \
- -v /path_on_volume_for_ceramic_logs:/root/.ceramic/logs \
- -v /path_on_volume_for_ceramic_statestore:/root/.ceramic/statestore \
- -e NODE_ENV=production \
- -e CERAMIC_INDEXING_DB_URI=postgres://username:password@host:5432/dbname \
- --name ceramic \
- js-ceramic --ipfs-api http://ipfs_ip_address:5101
-```
-
-### Editing the `daemon.config.json` file
-
-To have the settings persist in your Ceramic node, edit the `daemon.config.json` file to include the configurations. The default location is `~/.ceramic/daemon.config.json`. For a full file example, see the [Ceramic](./server-configurations#default-configurations) docs.
-
-```json
-...
- "ipfs": {
- "mode": "remote",
- "host": "http://ipfs_ip_address:5101"
- },
-...
-```
-
-```json
-...
-"indexing": {
- "db": "postgres://username:password@host:5432/dbname",
- "allow-queries-before-historical-sync": true,
- "enable-historical-sync": false
- }
-...
-```
-
-## Next Steps
-
-- Understand the different ways to [configure your server](../../guides/composedb-server/server-configurations.mdx), including choosing a network
-- Use your Admin DID to authenticate your node to gain [access to mainnet](../../guides/composedb-server/access-mainnet.mdx)
diff --git a/docs/composedb/guides/composedb-server/running-locally.mdx b/docs/composedb/guides/composedb-server/running-locally.mdx
deleted file mode 100644
index 3ee8d69d..00000000
--- a/docs/composedb/guides/composedb-server/running-locally.mdx
+++ /dev/null
@@ -1,125 +0,0 @@
-# Running Locally
-Run a ComposeDB server on your local machine, e.g. your laptop.
-
-## Things to Know
-- ComposeDB requires running a Ceramic node for decentralized data and a SQL instance for your index database.
-- ComposeDB requires a running `ceramic-one` node, which is responsible for storing the data and coordinating with network participants.
-  Make sure to configure and run the `ceramic-one` node first. You can find the steps to install and start a `ceramic-one` instance [here](../../set-up-your-environment#2-installation).
-- ComposeDB server can also be run locally [using Docker](../../guides/composedb-server/running-in-the-cloud.mdx).
-
-:::tip
-
-If you want to serve a live application in production, see [Running in the Cloud](../../guides/composedb-server/running-in-the-cloud.mdx).
-
-:::
-
-
-## Using Wheel
-
-The easiest way to run a ComposeDB server on your local machine is using [Wheel](https://github.com/ceramicstudio/wheel.git).
-
-### Requirements
-
-- Node.js
-- jq
-- PostgreSQL (optional dependent on the network)
-- [ceramic-one](../../set-up-your-environment.mdx#2-installation) node up and running
-
-Head to [Setup Your Environment](../../set-up-your-environment.mdx#install-the-dependencies) section for more detailed dependency installation instructions.
-
-
-**Supported Operating Systems**
-- Linux
-- Mac
-- Windows (only WSL2)
-
-
-### Setup
-
-First, install and run the `ceramic-one` binary:
-```bash
-brew install ceramicnetwork/tap/ceramic-one
-```
-```bash
-ceramic-one daemon
-```
-
-Next, download the Wheel:
-
-```bash
-curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/ceramicstudio/wheel/main/wheel.sh | bash
-```
-
-Once downloaded, start the Wheel and follow the setup prompt:
-
-```bash
-./wheel
-```
-
-When following the prompt, make sure to accept the `Include Ceramic?` and `Include ComposeDB?` options to start a
-local Ceramic node enabled with ComposeDB.
-
-For detailed prompt reference and advanced Ceramic configurations head to [Wheel Reference](../../../wheel/wheel-reference.mdx).
-
-
-
-
-## Using npm
-An alternative way to run a ComposeDB server locally is using npm. This option involves more manual configuration.
-
-
-### Requirements
-**Runtime**
-- Node.js Version 16
-
-**Package Manager**
-- npm Version 6
-
-**Supported Operating Systems**
-- Linux
-- Mac
-- Windows
-
-:::note
-
-For Windows, Windows Subsystem for Linux 2 (WSL2) is strongly recommended. Using the Windows command line is not portable and can cause compatibility issues when running the same configuration on a different operating system (e.g. in a Linux-based cloud deployment).
-
-:::
-
-### Installation
-
-Install and run the `ceramic-one` binary:
-```bash
-brew install ceramicnetwork/tap/ceramic-one
-```
-```bash
-ceramic-one daemon
-```
-
-Install the Ceramic CLI and ComposeDB CLI using npm:
-
-```bash
-npm install -g @ceramicnetwork/cli @composedb/cli
-```
-
-### Basic Setup
-#### Admin account
-If you don’t already have one, you’ll need to create an admin account (DID) to handle restricted changes and admin operations on your node. To do so, follow the steps in [Set up your environment](../../set-up-your-environment.mdx#developer-account) to generate a key & account. Once you’ve added your admin DID to the config file, return here.
-
-#### Start the daemon
-Using a command line utility or terminal, start the Ceramic daemon:
-
-```bash
-ceramic daemon
-```
-
-### Configurations
-You now have a server running with the default configuration and a preconfigured IPFS node that can be used by the [ComposeDB Client](../../guides/composedb-client/composedb-client.mdx).
-
-## Next Steps
-Edit your [Server Configurations](../../guides/composedb-server/server-configurations.mdx) for your use case.
-
-## Related Guides
-Check out our other guides for running a node:
-
-- [Running in the Cloud](../../guides/composedb-server/running-in-the-cloud.mdx)
\ No newline at end of file
diff --git a/docs/composedb/guides/composedb-server/server-configurations.mdx b/docs/composedb/guides/composedb-server/server-configurations.mdx
deleted file mode 100644
index a51310ae..00000000
--- a/docs/composedb/guides/composedb-server/server-configurations.mdx
+++ /dev/null
@@ -1,242 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-
-# Server Configurations
-Manage the configurations for your ComposeDB server.
-
-## Default configurations
-When you start the daemon using the `ceramic daemon` command, if a configuration file is not present in the expected path `$HOME/.ceramic/daemon.config.json`, the command will create a new `daemon.config.json` file with the following defaults:
-
-```json
-{
- "anchor": {
- "ethereum-rpc-url": "https://eg_infura_endpoint" // Replace with an Ethereum RPC endpoint to avoid rate limiting
- },
- "http-api": {
- "cors-allowed-origins": [
- ".*"
- ]
- },
- "ipfs": {
- "mode": "remote",
- "host": "http://ipfs_ip_address:5101"
- },
- "logger": {
- "log-level": 2, // 0 is most verbose
- "log-to-files": true
- },
- "network": {
- "name": "mainnet", // Connect to mainnet, testnet-clay, or dev-unstable
- },
- "node": {},
- "state-store": {
- "mode": "fs", // volume storage option shared here, can be replaced by S3 mode & bucket
- "local-directory": "/path_for_ceramic_statestore", // Defaults to $HOME/.ceramic/statestore
- }
-}
-```
-
-### Key configurations
-These are the configurations you should pay close attention to, described below on this page:
-
-- Networks & Environments
-- SQL Database
-- History Sync
-- IPFS Process
-- Metrics
-
-### Changing configurations
-ComposeDB configurations can be set in two places: using the config file and using the CLI. Although we recommend making changes using the config file, for completeness this guide demonstrates both.
-
-**Using the `daemon.config.json` file (recommended)**
-
-The config file is a JSON file used to set durable, long-lived node configurations. After making changes to the config file, be sure to save your changes then restart the daemon for them to take effect.
-
-This is the preferred method for setting configs, especially for stable production usage.
-
-**Using the CLI**
-
-The CLI can be used to set temporary, short-lived node configurations. To do this, pass designated CLI flags to the daemon at startup. This method is only recommended in a scripted test environment or when starting the daemon in a singleton way for test purposes.
-
-:::tip
-
-When using the CLI, always execute the same flags each time the node restarts or else you will reset to default settings.
-
-:::
-
-## Network
-Networks are collections of nodes that communicate, store data, and share data. When running a ComposeDB server, you need to decide which network it will connect to.
-
-### Available networks
-
-Each network has its own string designation. Find more information about the networks [here](../../../protocol/js-ceramic/networking/networks).
-
-| Name | Description | Default Value |
-| --- | --- | --- |
-| mainnet | Primary public production network | |
-| testnet-clay | Primary public test network | ✅ |
-| dev-unstable | Core protocol debugging network, very experimental | |
-| local | Local instance for development | |
-
-:::info
-
-Networks are completely isolated, distinct development environments. Models and data that exist on one network *do not* exist on other networks, and are not portable.
-
-:::
-
-### Setting the network
-The system will default to `testnet-clay` if a network is not set.
-
-
-
-
-```json
-"network": {
- "name": "testnet-clay"
- }
-```
-
-
-
-
-```bash
-# Connect to testnet-clay network on startup
-
-ceramic daemon --network "testnet-clay"
-```
-
-
-
-
-### Changing networks
-To switch from one network to another, such as from `testnet-clay` to `mainnet`:
-
-
-
-
-```json
-"network": {
- "name": "mainnet"
- }
-```
-
-
-
-
-```bash
-ceramic daemon --network "mainnet"
-```
-
-
-
-
-
-:::info
-
-Be mindful that models and data are not portable across networks.
-
-If you want to switch networks locally, you need to either drop or move your default database. To prevent data loss, the preferred way is to simply move/rename the database.
-
-1. Stop your node/Ceramic daemon.
-2. Depending on your default database configuration, execute the following commands:
-
-**SQLite**: `mv ~/.ceramic/indexing.sqlite ~/.ceramic/indexing.sqlite.NETWORK`
-
-**Postgres**:
-- `psql postgres`
-- `ALTER DATABASE ceramic RENAME TO ceramic_NETWORK; \q`
-
-3. Restart your Ceramic daemon with the newly desired network config, and ComposeDB will set up the new default environment automatically.
-
-To switch back between networks, simply follow the steps above again and restore the desired backup to the default values:
-**SQLite**: `~/.ceramic/indexing.sqlite`
-**Postgres**: default database name `ceramic`
-
-:::
-
-
-## SQL Database
-One of the most important configurations that you must set up is your database. This database will be used to store an index of data for all models used by your app.
-
-### Available SQL databases
-
-| Name | Description | Default Value |
-| --- | --- | --- |
-| Postgres | Recommended for everything besides early prototyping | |
-| SQLite | Default option; can only be run locally, recommended for early prototyping | ✅ |
-
-:::caution
-
-Only Postgres is currently supported for production usage.
-
-:::
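-
-As a sketch, a Postgres-backed node sets the indexing database connection string under the `indexing` key of `daemon.config.json` (replace the credentials, host, and database name with your own):
-
-```json
-"indexing": {
-    "db": "postgres://user:password@localhost:5432/ceramic"
-}
-```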
-
-
-## IPFS Process
-### Available Configurations
-
-| Name | Description | Default Value |
-| --- | --- | --- |
-| remote | IPFS running in separate compute process; recommended for production and everything besides early prototyping | ✅ |
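-
-A minimal sketch of the corresponding `daemon.config.json` section, assuming an IPFS API reachable on its default port:
-
-```json
-"ipfs": {
-    "mode": "remote",
-    "host": "http://localhost:5001"
-}
-```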
-
-### Persistent Storage
-To run a Ceramic node in production, it is critical to persist the [Ceramic state store](#ceramic-state-store) and the [IPFS datastore](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#datastorespec). The form of storage you choose should also be configured for disaster recovery with data redundancy, and some form of snapshotting and/or backups.
-
-**Loss of this data can result in permanent loss of Ceramic streams and will cause your node to be in a corrupt state.**
-
-The Ceramic state store and IPFS datastore are stored on your machine's filesystem by default. The Ceramic state store defaults to `$HOME/.ceramic/statestore`. The IPFS datastore defaults to `ipfs/blocks` located wherever you run IPFS.
-
-The fastest way to ensure data persistence is by mounting a persistent volume to your instances and configuring the Ceramic and IPFS nodes to write to the mount location. The mounted volume should be configured such that the data persists if the instance shuts down.
-
-### IPFS Datastore
-
-The IPFS datastore stores the raw IPFS blocks that make up Ceramic streams. To prevent data corruption, use environment variables written to your profile file, or otherwise injected into your environment on start, so that the datastore location does not change between reboots.
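-
-For example, with go-ipfs (Kubo) the datastore location can be pinned using the `IPFS_PATH` environment variable in your profile file (the mount path below is an example placeholder):
-
-```bash
-# Keep the IPFS datastore on a persistent mount across reboots
-export IPFS_PATH=/mnt/ceramic-data/ipfs
-```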
-
-Note: Switching between data storage locations is an advanced feature and should be avoided. Depending on the sharding implementation, you may need to do a data migration first. See the [datastore spec](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#datastorespec) for more information.
-
-### Ceramic State Store
-
-The Ceramic State Store holds state for pinned streams and acts as a cache for the Ceramic streams that your node creates or loads. To ensure that the data you create with your Ceramic node does not get lost, you must pin the streams you care about and ensure that the state store does not get deleted.
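-
-A sketch of a filesystem-backed state store in `daemon.config.json`, assuming a persistent volume mounted at `/mnt/ceramic-data`:
-
-```json
-"state-store": {
-    "mode": "fs",
-    "local-directory": "/mnt/ceramic-data/statestore"
-}
-```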
-
-## Metrics
-Metrics are a critical part of running a production Ceramic node. They allow you to monitor the health of your node and the network, and to debug issues when they arise.
-
-js-ceramic produces metrics in the Prometheus format. You can configure your Ceramic node to expose these metrics on an HTTP endpoint, which can then be scraped by a Prometheus server.
-Alternatively, you can configure the Ceramic node to send metrics to an OpenTelemetry collector endpoint.
-
-### Prometheus endpoint
-In the `metrics` section of the daemon config, set the `prometheus-exporter-enabled` field to `true` and add a port number (use whatever free port you prefer):
-
-```json
-"metrics": {
-    "prometheus-exporter-enabled": true,
-    "prometheus-exporter-port": 9464
-},
-```
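-
-Once the daemon is running, you can verify that the endpoint responds (assuming the exporter serves the standard `/metrics` path on the configured port):
-
-```bash
-# Fetch the raw Prometheus metrics
-curl http://localhost:9464/metrics
-```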
-
-### OpenTelemetry collector endpoint
-In the `metrics` section of the daemon config, set the `metrics-exporter-enabled` field to `true` and add a collector host endpoint:
-
-```json
-"metrics": {
-    "metrics-exporter-enabled": true,
-    "collector-host": "your-collector-host"
-},
-```
-
-Depending on the version of js-ceramic, environment variables may be available for setting metrics options. See the [js-ceramic docs](https://github.com/ceramicnetwork/js-ceramic/blob/develop/README.md) for more information.
-
-## Next Steps
-- [Access Mainnet](../../guides/composedb-server/access-mainnet.mdx)
diff --git a/docs/composedb/guides/data-interactions/data-interactions.mdx b/docs/composedb/guides/data-interactions/data-interactions.mdx
deleted file mode 100644
index 87eec1c8..00000000
--- a/docs/composedb/guides/data-interactions/data-interactions.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
-# Data Interactions
-Query and mutate data on ComposeDB.
-
-## Overview
-After setting up your [ComposeDB Client](../../guides/composedb-client/javascript-client.mdx), you can perform queries and mutations on ComposeDB data.
-
-## Getting Started
-
-### Queries
-
-[**Queries**](../../guides/data-interactions/queries.mdx) allow you to fetch data.
-
-### Mutations
-
-[**Mutations**](../../guides/data-interactions/mutations.mdx) allow you to create or update data.
diff --git a/docs/composedb/guides/data-interactions/mutations.mdx b/docs/composedb/guides/data-interactions/mutations.mdx
deleted file mode 100644
index 435e967d..00000000
--- a/docs/composedb/guides/data-interactions/mutations.mdx
+++ /dev/null
@@ -1,146 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-
-# Mutations
-
-Create or update data on ComposeDB.
-
-## Prerequisites
-
-- An authenticated user
-- A deployed composite
-- A compiled composite
-
-:::tip
-The ComposeDB Client automatically generates a GraphQL Schema from your compiled composite.
-:::
-
-## Enable mutations
-
-Mutations require an authenticated user. After you have an authenticated user, enable mutations by setting their authenticated account on the ComposeDB client:
-
-
-
-
-
-```jsx
-// Assign the authorized did from your session to your client
-
-compose.setDID(session.did)
-```
-
-
-
-
-```jsx
-// Call setDID method on ComposeClient instance
-// Using authenticated did instance
-
-compose.setDID(did)
-```
-
-
-
-
-
-## Create data
-Let’s say your app uses a `Post` model:
-
-```graphql
-type Post @createModel(accountRelation: LIST, description: "A simple text post") {
- author: DID! @documentAccount
- title: String! @string(minLength: 10, maxLength: 100)
- text: String! @string(maxLength: 500)
-}
-```
-
-Users will generate data as they interact with your app. Your app needs to perform mutations to write that data to the network. Here’s a mutation query that creates a new post:
-
-```graphql
-# Create post
-
-mutation CreateNewPost($i: CreatePostInput!) {
- createPost(input: $i) {
- document{
- id
- title
- text
- }
- }
-}
-
-# Content for the post
-
-{
- "i": {
- "content": {
- "title": "Getting started with ComposeDB"
- "text": "A Post created using composites and GraphQL"
- }
- }
-}
-```
-
-Where:
-
-- `mutation`: GraphQL keyword for creating a write operation.
-- `CreateNewPost`: custom name given to this mutation. This name should represent what the mutation is doing and can be anything you’d like it to be.
-- `($i: CreatePostInput!)` creates a variable named `i` with the requirement that its value is of the type `CreatePostInput`. This type is automatically created for you as a part of the run-time composite. Notice the `!`, which informs us that this input is required.
-- `createPost` corresponds to an automatically generated GraphQL binding that is part of the run-time representation of your composite. The names of these bindings follow the naming convention `create<ModelName>`.
-- `(input: $i)` is using the value provided for `$i` as the input for the mutation. This will be defined as a variable to this operation.
-- The final piece, `document{id,title,text}`, defines the fields of the document we would like this mutation to return. It’s important to note that you need to include `id` here in the mutation, but you will not need to include it in the query variables, as it is automatically generated.
-- Variables: As you can see, `i` contains `content` that matches the `title` and `text` fields in the schema above. Both are supplied with values of type `string`. This sets up the variables needed for the query.
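-
-From JavaScript, the same mutation can be executed with the client's `executeQuery` method; a minimal sketch, assuming an authenticated `ComposeClient` instance named `compose` as set up above:
-
-```jsx
-// Execute the mutation with its variables through the client
-const result = await compose.executeQuery(
-  `
-  mutation CreateNewPost($i: CreatePostInput!) {
-    createPost(input: $i) {
-      document {
-        id
-        title
-        text
-      }
-    }
-  }
-  `,
-  {
-    i: {
-      content: {
-        title: 'Getting started with ComposeDB',
-        text: 'A Post created using composites and GraphQL',
-      },
-    },
-  }
-)
-
-// result.data.createPost.document.id holds the new document's stream ID
-```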
-
-## Update data
-Let’s say a user wanted to modify the title of a previous post. Your app would need to perform a mutation to update that field in the post.
-
-```graphql
-# Update post
-
-mutation UpdatePost($i: UpdatePostInput!) {
- updatePost(input: $i) {
- document {
- id
- title
- text
- }
- }
-}
-
-# Content to be updated
-
-{
-  "i": {
-    "id": "your-post-stream-id",
-    "content": {
-      "title": "Getting started with ComposeDB on Ceramic",
-      "text": "A Post created using composites and GraphQL"
-    }
-  }
-}
-```
-
-### Creating composites
-To use your models in an app, bundle them into a composite. Create the composite from your schema file:
-
-```bash
-composedb composite:create my-schema.graphql --output=my-composite.json --did-private-key=your-private-key
-```
-
-
-
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { DID } from 'dids'
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-import { fromString } from 'uint8arrays/from-string'
-
-// Import the devtool node package
-import { createComposite, writeEncodedComposite } from '@composedb/devtools-node'
-
-// Hexadecimal-encoded private key for a DID having admin access to the target Ceramic node
-// Replace the example key here by your admin private key
-const privateKey = fromString('b0cb[...]515f', 'base16')
-
-const did = new DID({
- resolver: getResolver(),
- provider: new Ed25519Provider(privateKey),
-})
-await did.authenticate()
-
-// Replace by the URL of the Ceramic node you want to deploy the Models to
-const ceramic = new CeramicClient('http://localhost:7007')
-// An authenticated DID with admin access must be set on the Ceramic instance
-ceramic.did = did
-
-// Replace by the path to the source schema file
-const composite = await createComposite(ceramic, './source-schema.graphql')
-
-// Replace by the path to the encoded composite file
-await writeEncodedComposite(composite, './my-composite.json')
-```
-
-
-
-
-This will create a file called `my-composite.json` which contains the composite in JSON.
-
-### Deploying composites
-After creating the composite, deploy it to your local node:
-
-
-
-
-```bash
-composedb composite:deploy my-composite.json --ceramic-url=http://localhost:7007 --did-private-key=your-private-key
-```
-
-
-
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { DID } from 'dids'
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-import { fromString } from 'uint8arrays/from-string'
-
-import { readEncodedComposite } from '@composedb/devtools-node'
-
-// Hexadecimal-encoded private key for a DID having admin access to the target Ceramic node
-// Replace the example key here by your admin private key
-const privateKey = fromString('b0cb[...]515f', 'base16')
-
-const did = new DID({
- resolver: getResolver(),
- provider: new Ed25519Provider(privateKey),
-})
-await did.authenticate()
-
-// Replace by the URL of the Ceramic node you want to deploy the Models to
-const ceramic = new CeramicClient('http://localhost:7007')
-// An authenticated DID with admin access must be set on the Ceramic instance
-ceramic.did = did
-
-// Replace by the path to the local encoded composite file
-const composite = await readEncodedComposite(ceramic, 'my-first-composite.json')
-
-// Notify the Ceramic node to index the models present in the composite
-await composite.startIndexingOn(ceramic)
-```
-
-
-
-
-:::tip
-This will also automatically add all models contained in the composite to the [Model Catalog](./model-catalog.mdx).
-:::
-
-
-### Compiling composites
-
-After deploying your composite, compile it so you can start performing [data interactions](../../guides/data-interactions/data-interactions.mdx) using the ComposeDB client.
-
-```bash
-composedb composite:compile my-first-composite.json runtime-composite.json
-```
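-
-If you prefer to compile from a script, here is a sketch using the `writeEncodedCompositeRuntime` helper from `@composedb/devtools-node` (file names assumed to match the CLI example above):
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { writeEncodedCompositeRuntime } from '@composedb/devtools-node'
-
-const ceramic = new CeramicClient('http://localhost:7007')
-
-// Read the encoded composite and write its runtime representation
-await writeEncodedCompositeRuntime(
-  ceramic,
-  'my-first-composite.json',
-  'runtime-composite.json'
-)
-```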
-
-## Advanced
----
-### Merging composites
-
-If you have more than one composite, you need to merge them into a single composite for use in your app. This may apply when:
-
-- You want to use multiple models from the [catalog](./model-catalog.mdx#using-multiple-models)
-- You want to use a model from the catalog and one or more models you created
-- You create multiple models and store their schemas in different GraphQL files
-
-Let’s say you have two composites, where `simple-profile-composite.json` contains a profile model and `post-composite.json` contains a post model. To merge them, reference both composite JSON files and specify an output file path for the merged composite.
-
-
-
-
-```bash
-composedb composite:merge simple-profile-composite.json post-composite.json --output=merged-composite.json
-```
-
-
-
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { Composite } from '@composedb/devtools'
-import { readEncodedComposite, writeEncodedComposite } from '@composedb/devtools-node'
-
-const ceramic = new CeramicClient('http://localhost:7007')
-
-const loadSources = [
- 'simple-profile-composite.json',
- 'post-composite.json',
-].map(async (path) => await readEncodedComposite(ceramic, path))
-const sourceComposites = await Promise.all(loadSources)
-const mergedComposite = Composite.from(sourceComposites)
-
-await writeEncodedComposite(mergedComposite, 'merged-composite.json')
-```
-
-:::caution
-***Note:*** To run the code above, you need the Ceramic JS HTTP client library installed on your machine to connect your app to a Ceramic node. Install it by running the following command:
-
-`pnpm install @ceramicnetwork/http-client`
-
-:::
-
-
-
-
-The output of either example is a new file named `merged-composite.json` which contains the models of both merged composites. From here you need to [deploy](#deploying-composites) the composite to your node, then [compile](#compiling-composites) the composite to start using it.
-
-### Extracting composites
-In cases where your composite contains models not needed by your application, or where you generally want to separate models in your composite, you can extract models into a separate composite.
-
-As an example, let’s reuse the `merged-composite.json` file from the previous section and assume you want to extract the profile model into a separate composite. To do this, load the `merged-composite.json` file and specify which model(s) you’d like to extract into a new composite file.
-
-
-
-
-```bash
-composedb composite:extract-model merged-composite.json kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65 --output=new-composite.json
-```
-
-
-
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { Composite } from '@composedb/devtools'
-import { readEncodedComposite, writeEncodedComposite } from '@composedb/devtools-node'
-
-const ceramic = new CeramicClient('http://localhost:7007')
-const sourceComposite = await readEncodedComposite(ceramic, 'merged-composite.json')
-
-const extractedComposite = sourceComposite.copy(['kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65'])
-await writeEncodedComposite(extractedComposite, 'new-composite.json')
-```
-
-
-
-This will create a file called `new-composite.json` with your profile model in it. From here you need to deploy the composite to your node, then [compile](#compiling-composites) the composite to start using it.
-
-### Inspecting composites
-If you want to check what models are included in a specific composite, follow the steps below:
-
-1. Compile the composite:
-`composedb composite:compile my-first-composite.json runtime-composite.json`
-
-2. View the GraphQL schema of the composite:
-[`composedb graphql:schema runtime-composite.json --output=schema.graphql`](https://composedb.js.org/docs/0.5.x/api/commands/cli.graphql)
-
-### Aliasing composites
-In general, models are referenced using their unique model streamIDs, which are not memorable. Models can be referenced more easily by aliasing them to your preferred names.
-
-To manually set aliases for your models, add the following section to your composite JSON file. In this case we will use the aliases `SimpleProfile` and `Post`.
-
-```json
-"aliases":{
- "kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65":"SimpleProfile",
- "kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy":"Post"
- }
-```
-
-To set aliases programmatically, use the ComposeDB Devtools library. Here’s an example script that loads a composite JSON file and assigns the aliases `SimpleProfile` and `Post`:
-
-```jsx
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { Composite } from '@composedb/devtools'
-import { readEncodedComposite, writeEncodedComposite } from '@composedb/devtools-node'
-
-const ceramic = new CeramicClient('http://localhost:7007')
-const sourceComposite = await readEncodedComposite(ceramic, 'merged-composite.json')
-
-const newComposite = sourceComposite.setAliases({
- 'kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65': 'SimpleProfile',
- 'kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy': 'Post',
-})
-await writeEncodedComposite(newComposite, 'new-composite.json')
-```
-
-This script will create a file named `new-composite.json` including model aliases:
-
-```json
-"aliases":{
- "kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65":"SimpleProfile",
- "kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy":"Post"
- }
-```
-
-From here you need to [deploy](#deploying-composites) the composite to your node, then [compile](#compiling-composites) the composite to start using it. When interacting with the models inside your app, you can refer to them using their human-readable aliases rather than their streamIDs.
-
-## Next Steps
----
-Set up your [**ComposeDB Client**](../../guides/composedb-client/composedb-client.mdx)
-
-## Related Guides
----
-- [Model Catalog](../../guides/data-modeling/model-catalog.mdx)
-- [Writing Models](../../guides/data-modeling/writing-models.mdx)
\ No newline at end of file
diff --git a/docs/composedb/guides/data-modeling/data-modeling.mdx b/docs/composedb/guides/data-modeling/data-modeling.mdx
deleted file mode 100644
index 0d47530f..00000000
--- a/docs/composedb/guides/data-modeling/data-modeling.mdx
+++ /dev/null
@@ -1,31 +0,0 @@
-import DocCardList from '@theme/DocCardList';
-
-# Data Modeling
-
-Learn how to model data for ComposeDB.
-
-## Overview
----
-`Models` and `composites` are the core building blocks for ComposeDB apps.
-
-### Models
-
-A `model` is the GraphQL schema for a single piece of data (e.g. social post) including its relations to other models and accounts. Models are designed to be plug-and-play so they can easily be reused by developers; when multiple apps use the same model, they share the same underlying data set. To be usable in your ComposeDB app, you need to bundle one or more models into a composite.
-
-```graphql
-# Example Model that stores a display name
-
-type DisplayName @createModel(accountRelation: SINGLE, description: "Display name for a user") {
- displayName: String! @string(minLength: 3, maxLength: 50)
-}
-```
-
-### Composites
-
-A `composite` is a group of one or more models that defines the complete graph data schema for your app. Composites are used on both the ComposeDB server and the client.
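-
-For example, a composite can be created from a schema file with a single CLI command, covered in detail in the Composites guide (the private key here is a placeholder):
-
-```bash
-composedb composite:create my-schema.graphql --output=my-composite.json --did-private-key=your-private-key
-```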
-
-
-## Getting Started
----
-
-
\ No newline at end of file
diff --git a/docs/composedb/guides/data-modeling/introduction-to-modeling.mdx b/docs/composedb/guides/data-modeling/introduction-to-modeling.mdx
deleted file mode 100644
index 88eaec38..00000000
--- a/docs/composedb/guides/data-modeling/introduction-to-modeling.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
-# Introduction to Modeling
-Learn the basics of creating a new data model.
-
-## Setup
----
-Create a new `.graphql` file in your project directory to store your model(s).
-
-For example, let’s create a file called `my-schema.graphql`. Inside, we’re going to create a model to store a very simple user profile:
-
-```graphql
-type SimpleProfile @createModel(accountRelation: SINGLE, description: "Very basic profile") {
- displayName: String! @string(minLength: 3, maxLength: 50)
-}
-```
-
-## Metadata
----
-Let’s look into the metadata properties of your new model:
-
-```graphql
-type SimpleProfile @createModel(accountRelation: SINGLE, description: "Very basic profile")
-```
-
-Where:
-
-- `type` defines the name for your model, in our case `SimpleProfile`
-- `@createModel` is a directive that specifies we are creating a new model
-- `accountRelation` defines the allowable number of instances per account, where `SINGLE` limits one instance per account, and `LIST` allows unlimited instances per account
-- `description` is a string that describes the model
-
-:::tip
-
-Model names and descriptions are used in the [Model Catalog](./model-catalog.mdx). Aim for short and descriptive to improve discovery and reuse.
-
-:::
-
-## Schema
----
-Model schemas are written using the GraphQL Schema Definition Language (SDL). Let’s look at the schema of our new model. It’s a shape that only defines a single field (key) and scalar (value):
-
-```graphql
-{
- displayName: String! @string(minLength: 3, maxLength: 50)
-}
-```
-
-Where:
-
-- `displayName` is a field
-- `String!` is a scalar that defines `displayName` is a required (`!`) string
-- `@string` is a directive that sets validation rules for the scalar, in our case min and max length
-
-:::tip
-
-This is a very basic schema. Your schemas can contain more than one field and include various relations. See [Schemas](./schemas.mdx) next.
-
-:::
-
-## Next Steps
----
-To use your new model in your application, you will need to create a [**Composite →**](./composites.mdx)
-
-## Related Guides
----
-- Dive deeper into GraphQL [**Schemas**](./schemas.mdx)
-- Learn how to add [**Relations**](./relations.mdx) to your models
\ No newline at end of file
diff --git a/docs/composedb/guides/data-modeling/model-catalog.mdx b/docs/composedb/guides/data-modeling/model-catalog.mdx
deleted file mode 100644
index c30a2ea0..00000000
--- a/docs/composedb/guides/data-modeling/model-catalog.mdx
+++ /dev/null
@@ -1,61 +0,0 @@
-# Model Catalog
-Discover, share, and reuse data models.
-
-## Overview
----
-The catalog is a free and open source repository of all data models created by the ComposeDB developer community. The catalog aims to make it as easy as possible for developers to discover, share, and reuse each other's models and underlying data.
-
-
-
-### Use Cases
-
-- Discover high-quality models for your app
-- Share and distribute your models to others
-
-## Adding models to the catalog
----
-Models in all deployed composites are automatically available on the catalog.
-
-How it works:
-
-1. A developer deploys a [composite](./composites.mdx) containing models to testnet or mainnet
-2. An indexer builds a catalog of all deployed models and exposes it via API
-3. The catalog is automatically available on various interfaces, including ComposeDB CLI
-
-## Using models from the catalog
----
-### Prerequisites
-
-You need a running instance of ComposeDB server and CLI to use the catalog. See [set up your environment](../../set-up-your-environment.mdx) to get started.
-
-### List all models
-
-Using ComposeDB CLI, run the following command to list all models:
-
-```sh
-composedb model:list --table
-```
-
-You will see a table where each model has the following metadata properties:
-
-- `Name` - name of the model
-- `ID` - unique identifier (streamID) of the model
-- `Description` - description of the model
-
-### Using a single model
-Fetch a single model from the catalog and convert it into a composite, using its model ID:
-
-```sh
-composedb composite:from-model kjzl6hvfrbw6c5i55ks5m4hhyuh0jylw4g7x0asndu97i7luts4dfzvm35oev65 --output=my-composite.json
-```
-
-### Using multiple models
-Run the `composite:from-model` command depicted above for each model you want to use in your application, changing the composite file name each time to avoid collisions. After you have multiple composite files, merge them as shown below; see [Merging Composites](./composites.mdx#merging-composites) for details.
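-
-For example (the model IDs below are placeholders):
-
-```sh
-composedb composite:from-model <first-model-id> --output=profile-composite.json
-composedb composite:from-model <second-model-id> --output=post-composite.json
-composedb composite:merge profile-composite.json post-composite.json --output=merged-composite.json
-```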
-
-## Next Steps
----
-To use your newly created composite in your app, you will need to deploy and compile your composite.
-
-## Related Guides
----
-Can’t find what you’re looking for in the catalog? See [Writing Models](./writing-models.mdx) to learn how to write your own models.
\ No newline at end of file
diff --git a/docs/composedb/guides/data-modeling/relations-combine-items.mdx b/docs/composedb/guides/data-modeling/relations-combine-items.mdx
deleted file mode 100644
index c89c82dc..00000000
--- a/docs/composedb/guides/data-modeling/relations-combine-items.mdx
+++ /dev/null
@@ -1,168 +0,0 @@
-# Example: Combining Items
-
-## Creating the Models
-First, create the SDL for the first model to be combined:
-
-```graphql
-type Ball @createModel(accountRelation: LIST, description: "A ball to display") {
- creator: DID! @accountReference
- red: Int
- green: Int
- blue: Int
- radius: Float
-}
-```
-Save this to a file, such as `ball.graphql`. You can then add the model and get its ID:
-
- composedb composite:create --output ball.json ball.graphql
- cat ball.json | jq '.models | keys_unsorted[0]'
-
-Now we need a second model that will *combine* with the first model:
-
-```graphql
-type Obstacle @createModel(accountRelation: LIST, description: "An obstacle a ball can collide with") {
- creator: DID! @accountReference
- x: Int
- y: Int
- z: Int
- length: Int
- width: Int
- height: Int
-}
-```
-
-Next, we're going to combine the existing models into a new model:
-
-```graphql
-type Ball @loadModel(id: "") {
- id: ID!
-}
-
-type Obstacle @loadModel(id: ""){
- id: ID!
-}
-
-type Collision @createModel(accountRelation: LIST, description: "Collision between ball and object") {
- ballID: StreamID! @documentReference(model: "Ball")
- ball: Ball! @relationDocument(property: "ballID")
- obstacleID: StreamID! @documentReference(model: "Obstacle")
-  obstacle: Obstacle! @relationDocument(property: "obstacleID")
- x: Int
- y: Int
- z: Int
-}
-```
-Save this to a file and add it as above.
-
-We can now merge all of these and deploy them as a composite.
-
- composedb composite:merge ball.json obstacle.json collision.json --output=merged.json
- composedb composite:deploy merged.json
- composedb composite:compile merged.json runtime.json
-
-Our composite is now ready to use. We can use it with GraphiQL:
-
- composedb graphql:server --graphiql runtime.json
-
-## Inserting Data
-
-We can create an item with a mutation
-
-```graphql
-mutation CreateNewBall($i: CreateBallInput!){
- createBall(input: $i){
- document {
- id
- radius
- }
- }
-}
-```
-
-and variables
-
-```json
-{
- "i": {
- "content": {
- "creator": "",
- "radius": 45,
- "red": 10,
- "green": 20,
- "blue": 30
- }
- }
-}
-```
-
-We can create a second object with a mutation
-
-```graphql
-mutation CreateNewObstacle($i: CreateObstacleInput!){
- createObstacle(input: $i){
- document {
- id
- }
- }
-}
-```
-
-and variables
-
-```json
-{
-  "i": {
-    "content": {
-      "creator": "",
-      "x": 1,
-      "y": 2,
-      "z": 3,
-      "length": 4,
-      "width": 5,
-      "height": 6
-    }
-  }
-}
-
-Finally, we can define the resultant object from combining the items with a mutation
-```graphql
-mutation CreateCollision($i: CreateCollisionInput!){
- createCollision(input: $i){
- document {
- id
- }
- }
-}
-```
-and variables
-```json
-{
- "i": {
- "content": {
- "ballID": "",
- "obstacleID": ""
- }
- }
-}
-```
-## Query the Data
-We can query for the combined item:
-```graphql
-query {
- collisionIndex(first:5) {
- edges {
- node {
- id
- ball {
- id
- radius
- }
- obstacle {
- id
- }
- }
- }
- }
-}
-```
\ No newline at end of file
diff --git a/docs/composedb/guides/data-modeling/relations-container-of-items.mdx b/docs/composedb/guides/data-modeling/relations-container-of-items.mdx
deleted file mode 100644
index aea4f2b4..00000000
--- a/docs/composedb/guides/data-modeling/relations-container-of-items.mdx
+++ /dev/null
@@ -1,165 +0,0 @@
-# Example: Container of Items
-
-## Creating the Models
-First, create the SDL for your item:
-
-```graphql
-type Ball @createModel(accountRelation: LIST, description: "A ball to display") {
- creator: DID! @accountReference
- red: Int
- green: Int
- blue: Int
- radius: Float
-}
-```
-Save this to a file, such as `ball.graphql`. You can then add the model and get its ID:
-
- composedb composite:create --output ball.json ball.graphql
- cat ball.json | jq '.models | keys_unsorted[0]'
-
-Next, create the SDL for your container, without references to items:
-```graphql
-type World @createModel(accountRelation: LIST, description: "Ball World") {
- name: String! @string(minLength: 3, maxLength: 50)
-}
-```
-Save this to a file and add it as above. Then we will create a model to relate our item and container:
-```graphql
-type Ball @loadModel(id: "") {
- id: ID!
-}
-
-type World @loadModel(id: ""){
- id: ID!
-}
-
-type BallRelation @createModel(accountRelation: LIST, description: "Relate a ball to our world") {
- ballID: StreamID! @documentReference(model: "Ball")
- ball: Ball! @relationDocument(property: "ballID")
- worldID: StreamID! @documentReference(model: "World")
- world: World! @relationDocument(property: "worldID")
-}
-```
-For the relation, the ID will likely be the most recent model ID. Finally, relate our container to our items:
-```graphql
-type BallRelation @loadModel(id: "") {
- id: ID!
-}
-
-type World @loadModel(id: "") {
- balls: [BallRelation] @relationFrom(model: "BallRelation", property: "worldID")
-}
-```
-
-:::caution
-This is a view on top of the models, so you cannot require your items, such as with `balls: [BallRelation!]`
-:::
-
-We can now merge all of these and deploy them as a composite.
-
- composedb composite:merge ball.json world.json ball_relation.json world_relation.json --output=merged.json
- composedb composite:deploy merged.json
- composedb composite:compile merged.json runtime.json
-
-Our composite is now ready to use. We can use it with GraphiQL:
-
- composedb graphql:server --graphiql runtime.json
-
-## Inserting Data
-
-We can create an item with a mutation
-
-```graphql
-mutation CreateNewBall($i: CreateBallInput!){
- createBall(input: $i){
- document {
- id
- radius
- }
- }
-}
-```
-
-and variables
-
-```json
-{
- "i": {
- "content": {
- "creator": "",
- "radius": 45,
- "red": 10,
- "green": 20,
- "blue": 30
- }
- }
-}
-```
-
-We can create a container with a mutation
-
-```graphql
-mutation CreateNewWorld($i: CreateWorldInput!){
- createWorld(input: $i){
- document {
- id
- }
- }
-}
-```
-
-and variables
-
-```json
-{
-  "i": {
-    "content": {
-      "name": "test-world"
-    }
-  }
-}
-```
-
-Finally, we can define relations between the items and the container with a mutation
-```graphql
-mutation CreateBallRelation($i: CreateBallRelationInput!){
- createBallRelation(input: $i){
- document {
- id
- }
- }
-}
-```
-and variables
-```json
-{
- "i": {
- "content": {
- "ballID": "",
- "worldID": ""
- }
- }
-}
-```
-## Query the Data
-We can query for the container of the items, and from that find the items:
-```graphql
-query {
- worldIndex(first: 1) {
- edges {
- node {
- id
- name
- balls(first: 5) {
- edges {
- node {
- id
- ballID
- }
- }
- }
- }
- }
- }
-}
-```
\ No newline at end of file
diff --git a/docs/composedb/guides/data-modeling/relations.mdx b/docs/composedb/guides/data-modeling/relations.mdx
deleted file mode 100644
index a5cf43ad..00000000
--- a/docs/composedb/guides/data-modeling/relations.mdx
+++ /dev/null
@@ -1,198 +0,0 @@
-# Relations
-
-Define queryable relationships between models and other models or accounts.
-
-## Types of Relations
-
----
-
-There are a few primary forms of relations currently supported by ComposeDB:
-
-- [Account to model relations](#account-to-model)
-- [Model to account relations](#model-to-account)
-- [Model to model relations](#model-to-model)
-- [Account to account relations](#account-to-account)
-
-## Account to Model
-
----
-
-Account to model relations enable linking data to, and querying data by, the account that created it. By default, the `@createModel` directive (used when creating a new model) requires that every model specify a relation to its author’s account. This was covered in [Writing Models](../../guides/data-modeling/writing-models.mdx).
-
-### Example: Simple Profile
-
-Here’s a model for a very simple user profile that can be queried based on the author:
-
-```graphql
-# Define simple profile model
-# Relate it to the author's account
-# Limit to one profile per account
-# Enable queries based on author
-
-type SimpleProfile @createModel(accountRelation: SINGLE, description: "Very basic profile") {
- displayName: String! @string(minLength: 3, maxLength: 50)
-}
-```
-
-Where:
-
-- `accountRelation` relates the profile to the author’s account
-- `SINGLE` limits to one profile per account
-
-## Model to Account
-
----
-
-Model to account relations enable you to link data to and query data from an account other than the data’s author. When using this type of relation, you need to define a model field that stores an account (e.g. a [DID](../../guides/composedb-client/user-sessions.mdx)), then add the `@accountReference` directive to make it queryable.
-
-### Example: Direct message (DM)
-
-Here’s a model for a user-to-user message that can be queried based on the recipient:
-
-```graphql
-# Define message model
-# Relate it to author's account
-# Allow unlimited sent messages
-# Store reference to recipient's account
-# Enable queries based on recipient
-
-type Message @createModel(accountRelation: LIST, description: "Direct message model") {
- recipient: DID! @accountReference
- directMessage: String! @string(minLength: 1, maxLength: 200)
-}
-```
-
-Where:
-
-- `accountRelation` relates the message to the author’s account
-- `LIST` allows unlimited messages
-- `recipient` references the recipient’s account by storing its `DID!`, using `@accountReference`
-
-## Model to Model
-
----
-
-Model to model relations enable you to link data to and query data from another piece of data. These relations can be uni-directional (e.g. query a post from a comment) or bi-directional (e.g. query a post from a comment and query all comments from a post).
-
-There is a type of model-to-model relation that includes the user as part of the relationship. This is achieved by using the `SET` account relation type, which allows users to enforce a constraint where each user account (or DID) can create only one instance of a model for a specific record of another model.
-
-### Example: Post with comments and likes
-
-Here’s a model that allows many comments, from the same or different accounts, to be made on a single post. It supports unlimited comments per user, and bi-directional queries from any comment or like to the original post and from the original post to all of its comments and likes. The schema also creates a relation between posts and likes that enforces a single like per post per account, meaning a single account will only be able to like the post once.
-
-```graphql
-# Load post model (using streamID)
-
-type Post @loadModel(id: "kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy") {
- id: ID!
-}
-
-# New comment model
-# Set reference to original post
-# Enable querying comment to get original post
-
-type Comment @createModel(accountRelation: LIST, description: "A comment on a Post") {
- postID: StreamID! @documentReference(model: "Post")
- post: Post! @relationDocument(property: "postID")
- text: String! @string(maxLength: 500)
-}
-
-# New like model
-# Set relationship to original post
-# Enable querying a like to get the original post
-type Like @createModel(description: "A like on a post", accountRelation: SET, accountRelationFields: ["postID"]) {
- postID: StreamID! @documentReference(model: "Post")
- post: Post! @relationDocument(property: "postID")
-}
-```
-
-Relations can also be created between models loaded from known streamIDs
-
-```graphql
-# Load comment model
-
-type Comment @loadModel(id: "kjzl6hvfrbw6c9oo2ync09y6z5c9mas9u49lfzcowepuzxmcn3pzztvzd0c7gh0") {
- id: ID!
-}
-
-# Load post model
-# Extend post model with comments and likes
-# Set relationships to all comments and likes
-# Enable querying post to get all comments and likes
-
-type Post @loadModel(id: "kjzl6hvfrbw6c822s0cj1ug59spj648ml8a6mbqaz91wx8zx3mlwi76tfh3u1dy") {
- comments: [Comment] @relationFrom(model: "Comment", property: "postID")
- likes: [Like] @relationFrom(model: "Like", property: "postID")
-}
-```
-
-Where:
-
-- `id` is a simple placeholder, since empty types are not allowed
-- `postID` defines the relationship from a comment to the original post, using `@documentReference`
-- `post` allows accessing the original post from the comment, using `@relationDocument`
-- `text` defines a string for the comment
-- `comments` defines the relationships from a post to a collection of comments, using `@relationFrom`; requires specifying the model relation (`Comment`) and the specific property that stores the relation (`postID`)
-- `likes` defines the relationships from a post to a collection of likes, using `@relationFrom`; requires specifying the model relation (`Like`) and the specific property that stores the relation (`postID`)
-
-### Using interfaces
-
-When defining relations, it is possible to reference model interfaces to allow for a wider range of documents in the relations set, for example to create a collection of documents using different models implementing the same interface:
-
-```graphql
-interface TextContent @createModel(description: "Required text content interface") {
- text: String! @string(maxLength: 10000)
-}
-
-type Page implements TextContent @createModel(description: "Page model") {
- title: String @string(maxLength: 100)
- text: String! @string(maxLength: 10000)
-}
-
-type Post implements TextContent @createModel(description: "Post model") {
- title: String! @string(maxLength: 100)
- text: String! @string(maxLength: 10000)
- createdAt: DateTime!
-}
-
-type ContentCollectionItem
- @createModel(description: "Association between a collection and an item") {
- # The Node interface is used here instead of ContentCollection, see warning below
- collectionID: StreamID! @documentReference(model: "Node")
- collection: Node! @relationDocument(property: "collectionID")
- itemID: StreamID! @documentReference(model: "TextContent")
- item: TextContent! @relationDocument(property: "itemID")
-}
-
-type ContentCollection @createModel(description: "Collection of text contents") {
- name: String @string(maxLength: 50)
- items: [ContentCollectionItem]!
- @relationFrom(model: "ContentCollectionItem", property: "collectionID")
-}
-```
-
-:::caution Circular references
-
-ComposeDB does not support creating relations with circular references, such as `ContentCollection` -> `ContentCollectionItem` -> `ContentCollection` in the example above.
-
-To work around this limitation, it is possible to use the `Node` interface as a placeholder for any model. The example above uses the `Node` interface instead of `ContentCollection` to reference the collection in the `ContentCollectionItem` in order to avoid creating a circular reference.
-
-:::
-
-## Account to Account
-
----
-
-:::caution
-
-Account to account relations are on the roadmap, but not yet supported.
-
-:::
-
-Account to account relations enable you to define a relationship between an account and a different account, and query both ways based on that relationship. This is useful for creating structures such as social graphs where the relationship represents a follow.
-
-## Next Steps
-
----
-
-Now that you understand the fundamentals of creating models with different types of relations, let's create a [**composite**](../data-modeling/composites.mdx) so we can use it in our app.
diff --git a/docs/composedb/guides/data-modeling/schemas.mdx b/docs/composedb/guides/data-modeling/schemas.mdx
deleted file mode 100644
index db3d002e..00000000
--- a/docs/composedb/guides/data-modeling/schemas.mdx
+++ /dev/null
@@ -1,167 +0,0 @@
-# Schemas
-
-Learn how to write high-quality GraphQL schemas for your models.
-
-## Overview
-
-ComposeDB models are written in GraphQL using GraphQL Schema Definition Language [(SDL)](https://graphql.org/learn/schema/). Your schema defines a collection of object types and the relationships between them. Those types will have scalars (values), shapes (key-value mappings), and lists to describe the structure and validation rules for the model, and use directives for other metadata information.
-
-We currently support a subset of SDL’s scalars and directives, but are continually adding more; see the API [reference](https://composedb.js.org/docs/0.5.x/api/sdl/scalars) for a complete list.
-
-## Concepts
-
-Learn about key concepts for the GraphQL Schema Definition Language.
-
-:::note
-
-On this page, we provide basic info for you to begin writing GraphQL data models. For more complete information on the GraphQL Schema Definition Language, visit the [GraphQL website](https://graphql.org/learn/schema/).
-
-:::
-
-### Shapes, Fields, Scalars
-
-The most basic component in a GraphQL schema is an object type, sometimes called a shape. It simply represents the shape of the data you want to query and its properties, consisting of fields (keys) and scalars (values).
-
-```graphql
-type EducationModule {
- module_name: String!
- module_domain: String
- number_of_topics_covered: Int!
- learners_enrolled: [Learner!]!
-}
-```
-
-Where:
-
-- `type` defines a new object
-- `EducationModule` the name given to the object; queryable
-- `module_name`, `module_domain`, `number_of_topics_covered` and `learners_enrolled` are fields in the `EducationModule` type; all fields are queryable
-- `String!` and `Int!` define the data type of the value. By adding `!` to the end of the type declaration, we are telling GraphQL to always return a value when we query this field, which also means that when writing data through a mutation a value is required.
-- `[Learner!]!` defines the data type of the value, but in this case the data type is an array of another type, `Learner`, which is not depicted above. It is required since it includes the `!`.
-
-### Enums
-
-Enums represent the type of a single string value in the schema from a set of
-accepted values, for example:
-
-```graphql
-enum NoteStatus {
- DEFAULT
- IMPORTANT
- ARCHIVE
-}
-```
-
-### Special Types
-
-GraphQL reserves the use of two special type names, `query` and `mutation`.
-
-_Do not_ name any of your own custom types (which make up the majority of the types you will work with) the same as these two special types.
-
-- `query` type is used as the entry point when retrieving data using GraphQL
-- `mutation` type is used as the entry point when writing or changing data using GraphQL
-
-### Embedded Shapes
-
-Our first shape, `EducationModule`, makes use of an embedded shape called `Learner`:
-
-```graphql
-type EducationModule {
- module_name: String!
- module_domain: String
- number_of_topics_covered: Int!
- learners_enrolled: [Learner!]!
-}
-
-type Learner {
- first_name: String!
- last_name: String!
- username: String!
-}
-```
-
-`Learner` is not anything different from `EducationModule` in terms of how it is defined. It does contain different fields, but it is just a GraphQL shape that can be used like any other shape.
-
-💡 For this to work, you will want to define both shapes inside the same GraphQL file when writing ComposeDB schemas.
-
-### Interfaces
-
-Interfaces are abstract models defining common fields for other models. Objects can implement these interfaces to ensure they match their constraints and provide [additional relations options](./relations.mdx#using-interfaces), for example:
-
-```graphql
-interface TextContent @createModel(description: "Required text content interface") {
- text: String! @string(maxLength: 10000)
-}
-
-type Page implements TextContent @createModel(description: "Page model") {
- title: String @string(maxLength: 100)
- text: String! @string(maxLength: 10000)
-}
-
-type Post implements TextContent @createModel(description: "Post model") {
- title: String! @string(maxLength: 100)
- text: String! @string(maxLength: 10000)
- createdAt: DateTime!
-}
-```
-
-### Directives
-
-ComposeDB comes with [a list of different directives](https://composedb.js.org/docs/0.7.x/api/sdl/directives) that can be used to create or load data models, define type validation rules, and create indices for specific fields, enabling them to be used for document filtering and sorting.
-
-#### Type validation directives
-
-Directives are keywords that add validation rules to a scalar. Not all scalars need directives, though strings are required to have a `maxLength`. Let’s add directives to the two shapes used in this guide:
-
-```graphql
-type EducationModule {
- module_name: String! @string(maxLength: 50)
- module_domain: String @string(minLength: 5, maxLength: 50)
- number_of_topics_covered: Int! @int(min: 1, max: 100)
- learners_enrolled: [Learner!]! @list(maxLength: 30)
-}
-
-type Learner {
- first_name: String! @string(minLength: 10, maxLength: 30)
- last_name: String! @string(maxLength: 30)
- username: String! @string(maxLength: 32) @immutable
-}
-```
-
-Where:
-
-- Each directive is declared using the `@` symbol
-- `@string` adds validation rules to values that are strings, e.g. minimum and maximum length
-- `@int` adds validation rules to values that are integers, e.g. minimum and maximum values
-- `@list` adds validation rules to an array, e.g. maximum length
-- `@immutable` ensures that after a field value is set it won't be updated
-
-#### Directives for creating indices
-
-To be able to filter query results by a specific field and sort them in a specific order, you must create indices for the corresponding fields. In ComposeDB, indices work the same way as in traditional databases: they speed up querying. You can create indices for specific fields using the `@createIndex` directive as follows:
-
-```graphql
-type Posts
- @createModel(accountRelation: LIST, description: "A simple Post")
- @createIndex(fields: [{ path: "title" }])
- @createIndex(fields: [{ path: "tag" }])
- @createIndex(fields: [{ path: "created_at" }]) {
- title: String! @string(minLength: 1, maxLength: 100)
- body: String! @string(minLength: 1, maxLength: 100)
- tag: String! @string(minLength: 1, maxLength: 100)
- ranking: Int!
- created_at: DateTime!
-}
-```
-
-The example above will create indices for the fields `title`, `tag` and `created_at`, and will enable you to filter the documents based on the values in these fields as well as sort the results in a specified order.
-
-You can create indices for individual or multiple fields in your data models.
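-
-With those indices in place, documents can be filtered and sorted on the indexed fields. A sketch of a query against the `Posts` model above (the exact filter syntax may vary with your ComposeDB version):
-
-```graphql
-query {
-  postsIndex(
-    first: 5
-    filters: { where: { tag: { equalTo: "tutorial" } } }
-    sorting: { created_at: DESC }
-  ) {
-    edges {
-      node {
-        title
-        tag
-        created_at
-      }
-    }
-  }
-}
-```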
-
-## Next Steps
-
-Learn how to add [**Relations**](../data-modeling/relations.mdx) to your schema [**→**](../data-modeling/relations.mdx)
diff --git a/docs/composedb/guides/data-modeling/writing-models.mdx b/docs/composedb/guides/data-modeling/writing-models.mdx
deleted file mode 100644
index 8c82657f..00000000
--- a/docs/composedb/guides/data-modeling/writing-models.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
-# Writing Models
-Create new models or extend existing models.
-
-## Introduction to Modeling
-Learn the basics of creating a new data model.
-
-**[Introduction to Modeling →](../../guides/data-modeling/introduction-to-modeling.mdx)**
-
-
-## Schemas
-Learn how to write high-quality GraphQL schemas for your models.
-
-**[Schemas →](../../guides/data-modeling/schemas.mdx)**
-
-
-## Relations
-Define queryable relationships between models and other models or accounts.
-
-**[Relations →](../../guides/data-modeling/relations.mdx)**
\ No newline at end of file
diff --git a/docs/composedb/guides/index.mdx b/docs/composedb/guides/index.mdx
deleted file mode 100644
index 68fb214c..00000000
--- a/docs/composedb/guides/index.mdx
+++ /dev/null
@@ -1,9 +0,0 @@
-# Guides
-
-[**Data Modeling**](./data-modeling/data-modeling.mdx) - Learn how to model data for ComposeDB.
-
-[**ComposeDB Client**](./composedb-client/composedb-client.mdx) - Connect your app to a ComposeDB server.
-
-[**ComposeDB Server**](./composedb-server/composedb-server.mdx) - Set up and run a ComposeDB Server.
-
-[**Data Interactions**](./data-interactions/data-interactions.mdx) - Query and mutate data on ComposeDB.
diff --git a/docs/composedb/interact-with-data.mdx b/docs/composedb/interact-with-data.mdx
deleted file mode 100644
index dd39b2ae..00000000
--- a/docs/composedb/interact-with-data.mdx
+++ /dev/null
@@ -1,325 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-
-
-# Interact with data
-The final step of getting started with ComposeDB is interacting with your data using GraphQL. In this guide you will learn how to perform GraphQL queries and mutations using your composite.
-
-:::tip
-Want to interact with data using JavaScript instead? See [Client setup](./guides/composedb-client/javascript-client.mdx)
-:::
-
-## Setup
-### GraphQL Server
-To interact with data on the network, start a local GraphQL server by running the command below. Note that you have to provide your [private key](./set-up-your-environment#generate-your-private-key) via the `did-private-key` flag; it is also required for performing mutations, covered below.
-
-```bash
-composedb graphql:server --ceramic-url=http://localhost:7007 --graphiql runtime-composite.json --did-private-key=your-private-key --port=5005
-```
-
->✏️ ***Note:*** You can customize the port by configuring the `--port` flag.
-
-
-The output will display a URL, for example:
-```bash
-GraphQL server is listening on http://localhost:5005/graphql
-```
-
-### GraphQL Web UI
-In your browser, visit the URL that your local GraphQL server is listening on. You will see a simple UI that you can use to easily interact with your data.
-This UI allows you to run queries by simply writing them inside the editor and pressing the "Play" button to see the results:
-
-
-
-## Queries
-One of the most common data interactions you might want to do with ComposeDB is read records from the graph. Using GraphQL, you can query ComposeDB records indexed by your Ceramic node.
-
-In the [Create your composite](./create-your-composite.mdx) guide, we fetched two models from the Catalog: `Post` and `SimpleProfile`. Here we will focus on the `Post` model. For example, let’s say you want to check the first 2 entries that were indexed on the Post graph. This can be achieved by running a query like the one below, specifying that you want to retrieve the first 2 records:
-
-
-
-**Query**:
-
-
-
-```graphql
-query{
- postsIndex(first: 2) {
- edges {
- node {
- body
- }
- }
- }
-}
-```
-
-
-
-You should see a response similar to the one below. Here, nodes correspond to stored documents while edges represent the relations between nodes.
-
-
-
-**Response**:
-
-
-
-```json
-{
- "data": {
- "postsIndex": {
- "edges": [
- {
- "node": {
- "text": "A Post created using composites and GraphQL"
- }
- },
- {
- "node": {
- "text": "This is my second post!"
- }
- }
- ]
- }
- }
-}
-```
-
-
-
-You also have the option to retrieve specific records or the last `n` indexed records. For example, to check the last 3 records, run the query below:
-
-
-
-**Query:**
-
-
-
-```graphql
-query{
- postsIndex(last: 3) {
- edges {
- node {
- body
- }
- }
- }
-}
-```
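-
-If your model defines indices with the `@createIndex` directive (see the Schemas guide), you can also filter results; a sketch assuming an index on `tag` (the exact filter syntax may vary with your ComposeDB version):
-
-```graphql
-query{
-  postsIndex(first: 5, filters: { where: { tag: { equalTo: "User post" } } }) {
-    edges {
-      node {
-        body
-        tag
-      }
-    }
-  }
-}
-```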
-
-
-## Mutations
-There are three types of mutations you can perform on ComposeDB data: creating records, updating records, or changing whether a record is indexed.
-
-### Creating records
-Let’s say you would like to create a post and add it to the graph. To do that, run a mutation as shown below, passing the actual content as variables:
-
-
-
-**Query:**
-
-
-
-```graphql
-mutation CreateNewPost($i: CreatePostsInput!){
- createPosts(input: $i){
- document{
- id
- title
- body
- tag
- ranking
- created_at
- }
- }
-}
-```
-
-
-
-**Variables:**
-
-
-
-```json
-{
- "i": {
- "content": {
- "title": "New post",
- "body": "My new post on Ceramic",
- "tag": "User post",
- "ranking": 5,
- "created_at": "2024-12-03T10:15:30Z"
- }
- }
-}
-```
-
-
-
-The result of the query above will be a new document with a unique ID and the content you provided:
-
-
-
-**Response**:
-
-
-
-```json
-{
- "data": {
- "createPosts": {
- "document": {
- "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
- "title": "New post",
- "body": "My new post on Ceramic",
- "tag": "User post",
- "ranking": 5,
- "created_at": "2024-12-03T10:15:30Z"
- }
- }
- }
-}
-```
-
-
-:::note
-
-Stream IDs are unique. The “id” you will see in the response when performing the mutation above will be different. Keep that in mind
-as you follow this guide and update the id to the one that you see in your response.
-
-:::
-
-
-### Updating records
-Now let’s say you want to edit the post you created in the previous step. To update it, you have to run the `UpdatePost` mutation and pass the post’s unique ID along with the updated content as variables.
-
-:::info
-
-You can find your post’s ID in the response after you ran the `CreateNewPost` mutation.
-
-:::
-
-
-**Query:**
-
-```graphql
-mutation UpdatePost($i: UpdatePostsInput!) {
- updatePosts(input: $i) {
- document {
- id
- title
- body
- tag
- ranking
- created_at
- }
- }
-}
-```
-
-
-
-**Variables:**
-
-```json
-{
- "i": {
- "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
- "content": {
- "title": "New post",
- "body": "My new post on Ceramic using ComposeDB",
- "tag": "User post",
- "ranking": 5,
- "created_at": "2024-12-03T10:15:30Z"
- }
- }
-}
-```
-
-This mutation will update the record with ID `kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q`.
-
-**Response:**
-```json
-{
- "data": {
- "updatePosts": {
- "document": {
- "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
- "title": "New post",
- "body": "My new post on Ceramic using ComposeDB",
- "tag": "User post",
- "ranking": 5,
- "created_at": "2024-12-03T10:15:30Z"
- }
- }
- }
-}
-```
-
-### Remove/Restore record from index
-If, instead of updating the created post record, we want to stop indexing it, we need to call the `enableIndexingPosts`
-mutation with the `shouldIndex` option set to `false`. Conversely, to index an un-indexed record, we call the same `enableIndexingPosts`
-mutation with the `shouldIndex` option set to `true`, passing the post ID as a variable.
-
-
-**Query:**
-
-```graphql
-mutation EnableIndexingPost($i: EnableIndexingPostsInput!) {
- enableIndexingPosts(input: $i) {
- document {
- id
- }
- }
-}
-```
-
-
-
-**Variables:**
-
-```json
-{
- "i": {
- "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
- "shouldIndex": false
- }
-}
-```
-
-This mutation will un-index the record with ID `kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q`.
-
-**Response:**
-```json
-{
- "data": {
- "enableIndexingPosts": {
- "document": null
- }
- }
-}
-```
-
-## Authentication
-Although you can query records created by other accounts, you can only perform mutations on records controlled by your account. This guide did not require you to authenticate because you already did so in the [Set up your environment](./set-up-your-environment.mdx) guide.
-
-🔑 The `did-private-key` plays a very important role for these kinds of mutations: it ensures that only you, the account owner, can make changes to the streams that you created.
-
-## Next Steps
-Congratulations — you’re on your way to becoming a ComposeDB developer! 🔥
-
-Visit [Next Steps](./next-steps.mdx) for more integration guides and opportunities to contribute to the ComposeDB on Ceramic ecosystem.
-
-## Related Guides
-For more detailed descriptions and examples, see our advanced guides:
-
-- [Authentication for Mutations](./guides/composedb-client/user-sessions.mdx)
-
-- [Data Interactions](./guides/data-interactions/data-interactions.mdx)
-
-- [Queries](./guides/data-interactions/queries.mdx)
-
-- [Mutations](./guides/data-interactions/mutations.mdx)
-
-- [ComposeDB Client setup](./guides/composedb-client/javascript-client.mdx)
diff --git a/docs/composedb/introduction.mdx b/docs/composedb/introduction.mdx
deleted file mode 100644
index ec035ad4..00000000
--- a/docs/composedb/introduction.mdx
+++ /dev/null
@@ -1,49 +0,0 @@
-# ComposeDB Docs
-
-
-ComposeDB is a composable graph database built on [Ceramic](https://ceramic.network), designed for Web3 applications.
-
-### Use Cases
-| Use Case | Examples |
-|---|---|
-|__Decentralized identity__| `user profiles` `credentials` `reputation systems` |
-|__Web3 social__| `social graphs` `posts` `reactions` `comments` `messages` |
-|__DAO tools__| `proposals` `projects` `tasks` `votes` `contribution graphs` |
-|__Open information graphs__| `DeSci graphs` `knowledge graphs` `discourse graphs` |
-
-### Why ComposeDB?
-
-- Store and query data with powerful, easy-to-use GraphQL APIs
-- Build faster with a catalog of plug-and-play schemas
-- Bootstrap content by plugging into a composable data ecosystem
-- Deliver great UX with sign-in with Ethereum, Solana, and more
-- Eliminate trust and guarantee data verifiability
-- Scale your Web3 data infrastructure beyond L1 or L2 blockchains
-
-### Project Status: `Beta`
-
-ComposeDB officially entered `Beta` on February 28, 2023. What does this mean?
-
-- You can now build and deploy apps to production on mainnet!
-- Core features like GraphQL APIs, reusable models, and data composability are available
-- We will continue to improve performance and add more features
-- We are not yet guaranteeing a 100% stable, bug-free platform
-
-If you want to provide feedback, request new features, or report insufficient performance, please [make a post on the Forum](https://forum.ceramic.network/c/graph/9), as we'd like to work with you.
-Thank you for being a ComposeDB pioneer and understanding that great Web3 protocols take time to mature.
-
-
-
-
-### [Get Started →](./getting-started)
-Build a Hello World application and interact from the CLI.
-
-### [Development Guides →](./guides)
-Learn about data modeling, application set up, and data interactions.
-
-
-### [Core concepts →](./core-concepts)
-Dive deeper into the ComposeDB protocol and its components.
-
-### [Community →](../ecosystem/community.mdx)
-Connect with the ComposeDB developer community.
diff --git a/docs/composedb/next-steps.mdx b/docs/composedb/next-steps.mdx
deleted file mode 100644
index 14c5bfcf..00000000
--- a/docs/composedb/next-steps.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-
-# Next Steps
-
-After learning the foundations of ComposeDB with the [Getting Started](./getting-started.mdx) guide, you are now ready to start integrating ComposeDB into your application and running it in production.
-
-## Integration Guides
-
-- Visit the [Guides](./guides/index.mdx) section to learn more about creating & interacting with data
-
-## Examples
-
-- Create Ceramic App: [Blog](https://blog.ceramic.network/launching-create-ceramic-app/?ref=the-ceramic-blog-newsletter)
-
-## Go Further
-
-- Join the [Community](../ecosystem/community.mdx)
-- Learn more about [Core Concepts](./core-concepts.mdx)
-- Level up your [Data Modeling](./guides/data-modeling/data-modeling.mdx)
-- Perform more advanced [Data interactions](./guides/data-interactions/data-interactions.mdx)
diff --git a/docs/composedb/set-up-your-environment.mdx b/docs/composedb/set-up-your-environment.mdx
deleted file mode 100644
index 7ce7c066..00000000
--- a/docs/composedb/set-up-your-environment.mdx
+++ /dev/null
@@ -1,760 +0,0 @@
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-# Quickstart
-
-The first step to build with ComposeDB is setting up your development environment. This Quickstart guide will walk you through the process of setting up your local development environment from scratch.
-
-By the end of this guide you'll have a good understanding of how to get started building with ComposeDB.
-
-## 1. Prerequisites
-
-- Operating system: **Linux, Mac, or Windows** (on Windows, only via [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install))
-- **Node.js v20** - If you are using a different version, please use `nvm` to install Node.js v20 for best results (see the snippet below)
-- **npm v10** - Installed automatically with Node.js v20
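-
-If you use [nvm](https://github.com/nvm-sh/nvm), switching versions is a one-liner (a sketch assuming nvm is already installed):
-
-```bash
-nvm install 20 && nvm use 20
-```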
-
-## 2. Installation
-
-There are a few ways to set up your environment. Choose the one that best fits your needs:
-
-- [Using `create-ceramic-app`](#2a-installation-using-create-ceramic-app) - get up and running quickly with a basic ComposeDB application with one command. Good for a quick first experience with Ceramic and ComposeDB.
-- [Using the Wheel](#2b-installation-using-wheel) - the recommended and the easiest way to configure your full working environment and install the necessary dependencies.
-- [Using JavaScript package managers](#2c-installation-using-javascript-package-managers) - an alternative, more manual, way to configure your working environment which supports `npm`, `pnpm` and `yarn`.
-
-##### Install and start the `ceramic-one` binary
-
-All of the configuration options listed above **require a `ceramic-one` binary up and running**, which provides access to the data network. You can run `ceramic-one` on your
-local machine using the two simple steps listed below.
-
-:::note
-The instructions below cover the steps for macOS-based systems. If you are running on a Linux-based system, you can find the
-instructions [here](https://github.com/ceramicnetwork/rust-ceramic?tab=readme-ov-file#linux---debian-based-distributions).
-:::
-
-1. Install the component using [Homebrew](https://brew.sh/):
-
-```bash
-brew install ceramicnetwork/tap/ceramic-one
-```
-
-2. Start the `ceramic-one` daemon using the following command:
-```bash
-ceramic-one daemon
-```
-
-:::note
-By default, the command above will spin up a node that connects to the `testnet-clay` network. You can change this behaviour by providing the `--network` flag and specifying a network of your choice, e.g. `in-memory`:
-
-```bash
-ceramic-one daemon --network in-memory
-```
-:::
-
-
-By default, `ceramic-one` will store its data in the current directory. You can configure this behaviour by
-specifying the `--store-dir` and `--p2p-key-dir` arguments. For example:
-
-```bash
-ceramic-one daemon --store-dir ~/.ceramic-one --p2p-key-dir ~/.ceramic-one
-```
-
-
-With the `ceramic-one` binary up and running, you can move on to the ComposeDB installation and configuration method of your choice.
-
-
----
-
-### 2a. Installation using create-ceramic-app
-
-
-
-
-**When to use**
-
-When you want to get up and running quickly with a basic ComposeDB application with one command.
-
-
-
-**Time to install**
-
-Less than 2 minutes
-
-
-
-Just run the command below and follow the instructions:
-
-
-
-
-```powershell
-npx create-ceramic-app
-```
-
-
-
-
-```powershell
-pnpx create-ceramic-app
-```
-
-
-
-
-:::tip
-You need at least yarn 2.x to use the `yarn dlx` command. If you have an older version, upgrade it by running `yarn set version stable` and `yarn install`.
-
-Then you can run the following command to create a new Ceramic app using yarn 2.x
-:::
-
-```powershell
-yarn dlx create-ceramic-app
-```
-
-
-
-
-```powershell
-bunx create-ceramic-app
-```
-
-
-
-
----
-
-### 2b. Installation using Wheel
-
-
-
-
-**When to use**
-
-When you want to configure a full working environment and start working on your own app.
-
-
-
-**Time to install**
-
-5 minutes
-
-
-
-The easiest and recommended way to configure your full local development environment is by using [Wheel](https://github.com/ceramicstudio/wheel.git) - a CLI starter tool for Ceramic that makes it easy to install necessary dependencies and run a Ceramic node enabled with ComposeDB.
-
-
-
-#### Install the dependencies
-
-In order to use Wheel, you’ll have to install a few dependencies:
-
-##### Node.js
-
-If you don’t already have them installed, you will need to install at least:
-
-- [**Node.js v20**](https://nodejs.org/en/) - If you are using a different version, please use `nvm` to install Node.js v20 for best results.
-- **npm v10** - Installed automatically with Node.js v20.
-
-Make sure you have the correct versions installed.
-
-```powershell
-node -v
-npm -v
-```
-
-##### jq
-
-`jq` is a lightweight and flexible command-line JSON processor. The installation method depends on your operating system. Install it using one of the methods defined in
-the [official tutorial here](https://stedolan.github.io/jq/download/).
-
-##### PostgreSQL (optional)
-
-PostgreSQL is only required for a production configuration on the Mainnet. If you are new to ComposeDB on Ceramic and would like to quickly test it out, you can skip the PostgreSQL installation and come back to it once you are ready to scale your project. You will need Postgres installed on your machine to store indexed data.
-
-To install Postgres, follow [instructions provided on official Postgres documentation](https://www.postgresql.org/download/).
-Once installed, open Postgres in your command line:
-
-```powershell
-psql postgres
-```
-
-Configure your database using the following commands:
-
-```SQL
-CREATE DATABASE ceramic;
-CREATE ROLE ceramic WITH PASSWORD 'password' LOGIN;
-GRANT ALL PRIVILEGES ON DATABASE "ceramic" to ceramic;
-```
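-
-With the values above, the resulting connection string for your node would look like this (a sketch assuming a local PostgreSQL on its default port 5432):
-
-```bash
-postgres://ceramic:password@localhost:5432/ceramic
-```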
-
-#### Configure the development environment
-Make sure you have the `ceramic-one` binary up and running. To do that, follow the steps listed [here](#2-installation).
-
-Now you can use Wheel to install all of the dependencies needed to run Ceramic and ComposeDB as well as configure the working environment
-for your project.
-
-To download Wheel, run the command below:
-
-```powershell
-curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/ceramicstudio/wheel/main/wheel.sh | bash
-```
-
-Once Wheel is downloaded, you are good to start configuring your project working directory. To kick it off, run the command below:
-
-```powershell
-./wheel
-```
-
-Wheel will ask you a few questions, allowing you to configure your entire working environment - from what Ceramic dependencies you’d like to install to how your Ceramic node should be configured.
-
-You can run the following command to learn more about available Wheel commands and options:
-
-```powershell
-./wheel --help
-```
-
-For developers who are completely new to Ceramic, we highly recommend starting the configuration with all the default options. This will install the Ceramic and ComposeDB dependencies and spin up a local node running `InMemory`.
-
-At the end of the configuration, you will also be given the option to set up an example web3 social app to interact with and test ComposeDB features.
-
-:::important
-
-[Ceramic Anchor Service (CAS)](./guides/composedb-server/access-mainnet.mdx) is used to anchor Ceramic streams on a blockchain.
-CAS is required for the `dev`, `testnet-clay` and `mainnet` networks. Since the `InMemory` option doesn't use CAS, data generated for your project will not be persisted.
-
-:::
-
-If you are ready to dive into a more advanced configuration, head to [**Wheel reference**](../wheel/wheel-reference.mdx) page to learn more details about each parameter you can configure.
-
-
-
----
-
-### 2c. Installation using JavaScript package managers
-
-
-
-
-**When to use**
-
-When you want more control and a manual way to configure your working environment.
-
-
-
-**Time to install**
-
-5-10 minutes
-
-
-
-Another way to install the dependencies and configure Ceramic is using JavaScript package managers. This option requires more manual steps. The guide below covers this
-process step-by-step. If you have followed the [Wheel installation](#2b-installation-using-wheel) guide above, you can skip this section.
-
-#### Install the dependencies
-
-Start with creating the project directory. Here you’ll store all your app’s local files:
-
-```powershell
-mkdir my-project #creates a new directory
-cd my-project #targets the created directory
-```
-
-##### Node.js
-
-If you don’t already have them installed, you will need to install Node.js v20 and a package manager. We primarily use `pnpm`, but `npm` and `yarn` are supported as well.
-
-- [**Node.js v20**](https://nodejs.org/en/) - If you are using a different version, please use `nvm` to install Node.js v20 for best results.
-- [**pnpm v10**](https://pnpm.io/installation)
-
-Make sure you have the correct versions installed.
-
-```powershell
-node -v
-pnpm -v
-```
-
-##### ceramic-one
-
-Make sure you have the `ceramic-one` binary up and running. To do that, follow the steps listed [here](#2-installation).
-
-##### Ceramic
-
-ComposeDB runs on Ceramic, so you will need to run a Ceramic node. To get started, we recommend running a local Ceramic node. If you're interested in running a production node, you can follow one of the [guides here](./guides/composedb-server/).
-
-Ceramic CLI provides a set of commands that make it easier to run and manage Ceramic nodes. Start by installing the Ceramic CLI:
-
-
-
-
-```powershell
-npm install --location=global @ceramicnetwork/cli
-```
-
-
-
-
-```powershell
-pnpm install -g @ceramicnetwork/cli
-```
-
-
-
-
-:::caution
-
-Global packages are only supported in yarn 1.x (classic). For yarn 2.x and newer, use `yarn dlx` to run the CLI commands.
-
-:::
-
-```powershell
-yarn global add @ceramicnetwork/cli
-```
-
-
-
-
-##### ComposeDB
-
-Next install the ComposeDB CLI, which enables you to interact with ComposeDB data from your terminal:
-
-
-
-
-```powershell
-npm install --location=global @composedb/cli
-```
-
-
-
-
-```powershell
-pnpm add -g @composedb/cli
-```
-
-
-
-
-:::caution
-
-Global packages are only supported in yarn 1.x (classic). For yarn 2.x and newer, use `yarn dlx` to run the CLI commands.
-
-:::
-
-```powershell
-yarn global add @composedb/cli
-```
-
-
-
-
-:::tip
-
-The command above will install the latest version of the ComposeDB CLI. If you need to install a specific version, you
-can specify it by adding `@version-number` at the end of this command. You can also prefix the version number with `^` to
-install the latest patch. For example, if you'd like to install the latest patched version of ComposeDB 0.6.x you can run the command:
-
-`npm install --location=global @composedb/cli@^0.6.x`
-
-:::
-
-ComposeDB provides two additional libraries that support development:
-
-1. [@composedb/devtools](https://composedb.js.org/docs/0.5.x/api/modules/devtools) containing utilities related to managing composites
-2. [@composedb/devtools-node](https://composedb.js.org/docs/0.5.x/api/modules/devtools_node) which contains utilities for interacting with the local file system and starting a local HTTP server.
-
-To install the development packages, run:
-
-
-
-
-```powershell
-npm install -D @composedb/devtools @composedb/devtools-node
-```
-
-
-
-
-```powershell
-pnpm add -D @composedb/devtools @composedb/devtools-node
-```
-
-
-
-
-```powershell
-yarn add -D @composedb/devtools@^0.5.0 @composedb/devtools-node@^0.5.0
-```
-
-
-
-
-#### Setup
-
-All dependencies are installed. Now you can start setting up your project. The first step is to run a local Ceramic node.
-
-##### Run a Ceramic node
-
-You can check that everything was installed correctly by spinning up a Ceramic node. Running the command below will start the Ceramic node in local mode and connect to the Clay testnet.
-Indexing, which syncs data across nodes, is a key component of ComposeDB and must be enabled on your node:
-
-
-
-
-```powershell
-npx @ceramicnetwork/cli daemon
-```
-
-
-
-
-```powershell
-pnpm dlx @ceramicnetwork/cli daemon
-```
-
-
-
-
-```powershell
-yarn dlx @ceramicnetwork/cli daemon
-```
-
-
-
-
-You should see the following output in your terminal. This means you have successfully started a local node and connected to Clay testnet 🚀
-
-```powershell
-IMPORTANT: Ceramic API running on 0.0.0.0:7007
-```
-
-#### Developer Account
-
-Now that you have installed everything successfully and are able to run the node, let's create a developer account. You can stop
-the node for now by using the keyboard combination `Control+C`.
-
-##### Generate your private key
-
-You will need a private key for authorizing ComposeDB CLI commands in the later stages of development. You can generate it using the command below:
-
-```powershell
-composedb did:generate-private-key
-```
-
-You should see output similar to the one below. Keep in mind that the key generated for you will be unique and will differ from the example shown below:
-
-```bash
-✔ Generating random private key... Done!
-5c7d2fa8ebc488f2fe008e5ed1db7f1f95c203434bbcbeb703491c405f6f31f0
-```
-
-Copy and save this key securely for later use.
-
-:::important
-
-Store your private key securely - the key allows changes to be made to your app. In addition, you will need it throughout the app development process.
-
-:::
-
-##### Generate your account
-
-Indexing is one of the key features of ComposeDB. In order to notify the Ceramic node which models have to be indexed, the ComposeDB tools have to interact with the restricted Admin API. Calling the API requires an authenticated Decentralized Identifier (DID) to be provided in the node configuration file. Create a DID by running the following command, using the private key generated previously instead of the placeholder variable `your-private-key`:
-
-```powershell
-composedb did:from-private-key your-private-key
-```
-
-You should see output similar to the one below. Here again, the DID created for you will be unique and will differ from the one shown below:
-
-```bash
-✔ Creating DID... Done!
-did:key:z6MkoDgemAx51v8w692aZRLPdwP6UPKj3EgUhBTvbL7hCwLu
-```
-
-This key will be used to configure your node in the later steps of this guide.
-
-:::important
-
-Copy this authenticated DID key and store it in a secure place, just like with your private key above. This DID key will have to be provided in your Ceramic node’s configuration file which will ensure that only authorized users can make changes to your application, e.g. deploy models on your Ceramic node.
-
-:::
-
-#### Using your account
-
-The very first time you spin up a Ceramic node, a node configuration file is automatically created for you where you can configure how your node is operated. Here you have to provide the DID key which is authorised to interact with the Admin API.
-The Ceramic node configuration file will be created inside the automatically created `.ceramic` directory in your home directory (usually `/home/USERNAME/` on Linux or `/Users/USERNAME/` on Mac). This directory can be accessed using the following command:
-
-```powershell
-cd ~/.ceramic
-```
-
-Inside of this directory you should find the following files:
-
-- `daemon.config.json` - your Ceramic node configuration file
-- `statestore` - a local directory for [persisting the data](./guides/composedb-server/server-configurations#ceramic-state-store)
-
-Open the `daemon.config.json` file using your preferred code editor and provide the authenticated DID, generated in the [generate your account](#generate-your-account) step of this guide, in the `admin-dids` section of the file as shown in the example below:
-
-```json
-{
- ...
- "http-api": {
- ...
- "admin-dids": ["did:key:z6MkoDgemAx51v8w692aZRLPdwP6UPKj3EgUhBTvbL7hCwLu"]
- },
- "indexing": {
- ...
- "allow-queries-before-historical-sync": true
- }
-}
-```
-
-Save this file and start your Ceramic node again by following the steps in the [Confirmation](#confirmation) section of this guide.
-
-#### Confirmation
-
-As a final test, spin up the Ceramic local node:
-
-```powershell
-ceramic daemon --network=testnet-clay
-```
-
-Once again, you should see your local Ceramic node up and running as follows:
-
-```powershell
-IMPORTANT: Ceramic API running on 0.0.0.0:7007
-```
-
-By this point you should have your development environment and all configurations in place to get started working on your application.
-
-Congratulations!
-
----
-
-## 3. Frequently Asked Questions
-
-Some questions and issues come up more often than others. We've compiled a list of the most common ones here.
-
-
- Which setup method is better: Wheel or JavaScript package managers?
-
-
-
-  **create-ceramic-app** is the fastest. Good for your first interaction with ComposeDB.
-
- **Wheel** is the recommended and the easiest way to configure your working environment and install all the
- necessary dependencies. We highly recommended going with Wheel if you are just starting out with Ceramic.
- Everything will be taken care of for you.
-
-
- You might consider using **JavaScript package managers** if you are already familiar with Ceramic and need more
- manual configuration and control over your working environment.
-
-
-
-
-
- Which operating systems are supported?
-
-
- It's best to run Ceramic and ComposeDB on Linux or a Mac. You can also run it on Windows but you'll have to use
- WSL2 (Windows Subsystem for Linux). See the supported operating systems section at the top.
-
-
-
-
-
- Which Node.js version is preferred?
-
-
-  We have seen the best results using Node.js v20. Earlier versions are no longer supported, and later versions can
- cause issues for some users. While we're working on eliminating the issues, it's best to use Node v20 for now.
-
-
-
-
- How long does it take to install the packages?
-
-
- Installing everything (either with Wheel or JavaScript packages) takes usually between 2 and 10 minutes.
- Throughout the guide above you can find what kind of output you should be looking for to know that everything was
- installed correctly.
-
-
-
-
- Where in the system do I run all of the commands?
-
-
- Sometimes, especially when using JavaScript package managers to install Ceramic and ComposeDB, it's easy to forget
- that you need to run all of the commands in the app's directory. This directory is either automatically created
- for you when using Wheel, or you create it manually when using JavaScript package managers.
-
-
- When installing with JavaScript package managers you need to open 2-3 terminal windows and run different commands,
-  so it's easy to end up in the wrong directory. Please make sure you run all the commands where they're
- supposed to run.
-
-
-
-
- Where can I find a Ceramic node configuration file, daemon.config.json?
-
-
- When installing ComposeDB with JavaScript package managers, at some point you need to edit your Ceramic node
- config file. By default, it's in your home directory, in .ceramic folder (*not* in the app directory). It's easy
- to miss this detail so please check the path. This command should take you to the right directory: cd ~/.ceramic
-
-
-
-
- How to restart a node after stopping it?
-
- When you use Wheel to install Ceramic and ComposeDB, it takes care of the whole installation process. But please
- note that Wheel is just an installer, not a node launcher. If you want to launch Ceramic and ComposeDB again, after
- you have stopped it, you need to launch Ceramic daemon again and then launch ComposeDB.
-
-
-You can launch the Ceramic daemon by running the following command: ceramic daemon --network=InMemory
-
-You can launch ComposeDB by running the command: composedb
-
- More on all of the composedb command options can be found in "2. Create your composite" section of this Getting
- Started guide.
-
-
-
- How do I interact with the data once Ceramic node is running?
-
-
- The easiest way to interact with data is through a GraphQL Server. You can find all the details on how to set it
-  up, launch it, and interact with your data in the "3. Interact with data" section of this guide.
-
-
-
-
-
- Error when creating a composite: ✖ request to http://localhost:7007/(...) failed, reason: connect ECONNREFUSED
- ::1:7007
-
-
-
-The most likely cause is using Node.js v18. Please try using Node.js v20.
-
-
-
- Error: npm ERR! code EACCESS
-
-
-The most likely cause is read/write access on your system. Try running the command with sudo privileges.
-
-
-
- What if my question is not answered on this page?
-
-
-
- If your question is not answered in this guide, we recommend visiting our Community Forum (see the link in the
- footer). There, you can ask your question and get help from our community of developers and users. It's great to
- ask anything: from beginner to expert questions. The community and our developers are there to help you.
-
-
-
-
----
-
-## 4. Next Steps
-
-In this Quickstart guide, you have learned how to get started with ComposeDB. You have set up your development environment and are ready to start building your application. The next steps are:
-
-
-
-- [**Create your composite**](./create-your-composite) - Learn how to create your first composite, a reusable data model that can be used across different applications.
-- [**Interact with data**](./interact-with-data) - Learn how to interact with data in ComposeDB, from creating, reading, updating, and deleting data to running complex queries.
-- [**Core ComposeDB concepts**](./core-concepts) - Learn about the core concepts of ComposeDB, such as composites, schemas, and queries.
-- [**Running in the cloud**](./guides/composedb-server/running-in-the-cloud) - Ready to upgrade from a local node to production? Learn how to deploy your app.
diff --git a/docs/dids/authorization.md b/docs/dids/authorization.md
index 0e7328bd..230582c0 100644
--- a/docs/dids/authorization.md
+++ b/docs/dids/authorization.md
@@ -2,7 +2,7 @@
Authorize and then use DIDs where needed. At the moment, Ethereum and Solana accounts
are supported. Reference the chain/network specific libraries for more info on how to
-use each. Additional accounts will be supported in the future.
+use each. Additional accounts will be supported in the future.
Authorize with an Ethereum account using [@didtools/pkh-ethereum](https://did.js.org/docs/api/modules/pkh_ethereum):
@@ -32,9 +32,11 @@ const authMethod = await SolanaWebAuth.getAuthMethod(solProvider, accountId)
const session = await DIDSession.get(accountId, authMethod, { resources: [...]})
```
-With your session, use DIDs in composedb, ceramic & glaze libraries:
+With your session, use DIDs with Ceramic:
```js
-const ceramic = new CeramicClient()
-ceramic.did = session.did
+import { CeramicClient } from '@ceramic-sdk/http-client'
+
+const ceramic = new CeramicClient({ url: 'http://localhost:5101' })
+// Use session.did for authenticated operations
```
diff --git a/docs/dids/configuration.md b/docs/dids/configuration.md
index 14a63bda..8b9234ac 100644
--- a/docs/dids/configuration.md
+++ b/docs/dids/configuration.md
@@ -1,31 +1,24 @@
# Configuration
When creating a DID session, you need to pass an array of string identifiers for resources you want to authorize
-for. In the context of the Ceramic Network, resources are an array of Model Stream Ids or Streams Ids. Typically
-you will just pass resources from the `@composedb` libraries as you will already manage your Composites and Models
-there. For example:
+for. In the context of the Ceramic Network, resources are an array of Model Stream IDs or Stream IDs.
```js
-import { ComposeClient } from '@composedb/client'
+import { DIDSession } from 'did-session'
-//... Reference above and `@composedb` docs for additional configuration here
-
-const client = new ComposeClient({ceramic, definition})
-const resources = client.resources
-const session = await DIDSession.get(accountId, authMethod, { resources })
-client.setDID(session.did)
+const session = await DIDSession.get(accountId, authMethod, {
+ resources: ['kjzl6hvfrbw6c...'] // Model stream IDs
+})
```
-If you are still using `@glazed` libraries and tile document streams you will typically pass a wildcard resource,
-this all allows 'access all'. While not ideal, there is technical limits in `@glazed` libraries and tile document
-streams that make it difficult to offer more granular permission access to sets of stream. Authorization was mostly
-designed with model document streams and `@composedb` libraries in mind. Wildcard resource may not be supported in
-the future.
+You can also pass a wildcard resource to allow access to all streams:
```js
const session = await DIDSession.get(accountId, authMethod, { resources: [`ceramic://*`]})
```
+## Session Expiration
+
By default a session will expire in 1 week. You can change this time by passing the `expiresInSecs` option to
indicate how many seconds from the current time you want this session to expire.
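+
+For example, to shorten the session to one day (a sketch reusing the `accountId` and `authMethod` from the examples above):
+
+```js
+const session = await DIDSession.get(accountId, authMethod, {
+  resources: ['kjzl6hvfrbw6c...'], // Model stream IDs
+  expiresInSecs: 60 * 60 * 24, // expire after 1 day instead of the 1-week default
+})
+```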
diff --git a/docs/dids/guides/using-with-composedb-client.md b/docs/dids/guides/using-with-composedb-client.md
deleted file mode 100644
index 198bb0d5..00000000
--- a/docs/dids/guides/using-with-composedb-client.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Using With ComposeDB Client
-
-[ComposeDB](https://composedb.js.org) is a set of TypeScript libraries and tools to interact with the [Dataverse](https://blog.ceramic.network/into-the-dataverse/) using the [Ceramic network](https://ceramic.network/).
-
-First, create your instance of `ComposeClient` from the `@composedb/client` package, passing it the
-URL of the Ceramic node you want to use and the runtime definition of the composite you want to use in your app.
-
-```js
-import { ComposeClient } from '@composedb/client'
-import { definition } from './__generated__/definition.js'
-
-const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition })
-```
-
-Next, you can create a DID Session, passing it the resources from your client instance. The resources are a list of model
-stream IDs from your runtime composite definition:
-
-```js
-import { DIDSession } from 'did-session'
-import type { AuthMethod } from '@didtools/cacao'
-import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.request({ method: 'eth_requestAccounts' })
-const accountId = await getAccountId(ethProvider, addresses[0])
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
-
-const loadSession = async (authMethod: AuthMethod, resources: Array<string>): Promise<DIDSession> => {
-  return DIDSession.authorize(authMethod, { resources })
-}
-
-const session = await loadSession(authMethod, compose.resources)
-```
-
-Next, you can assign the authorized DID from your session to your client.
-
-```js
-compose.setDID(session.did)
-
-// use the compose instance to make queries in ComposeDB graph
-```
-
-Before you start making mutations with the client instance, you should make sure that the session has not expired:
-```js
-// before compose mutations, check if session is still valid, if expired, create new
-if (session.isExpired) {
-  const session = await loadSession(authMethod, compose.resources)
- compose.setDID(session.did)
-}
-
-// continue to make mutations
-```
-
-A typical pattern is to store a serialized session in local storage and load on use if available.
-
-:::caution Warning
-LocalStorage is used for illustrative purposes here and may not be best for your app, as
-there are a number of known issues with storing secret material in browser storage. The session string
-allows anyone with access to that string to make writes for that user for the time and resources that
-session is valid for. How that session string is stored and managed is the responsibility of the application.
-:::
-
-```js
-// An updated version of loadSession(...)
-const loadSession = async (authMethod: AuthMethod, resources: Array<string>): Promise<DIDSession> => {
- const sessionStr = localStorage.getItem('didsession')
- let session
-
- if (sessionStr) {
- session = await DIDSession.fromSession(sessionStr)
- }
-
- if (!session || (session.hasSession && session.isExpired)) {
- session = await DIDSession.authorize(authMethod, { resources })
- localStorage.setItem('didsession', session.serialize())
- }
-
- return session
-}
-```
diff --git a/docs/ecosystem/community.mdx b/docs/ecosystem/community.mdx
index 3e005e50..99bf5d1f 100644
--- a/docs/ecosystem/community.mdx
+++ b/docs/ecosystem/community.mdx
@@ -1,38 +1,23 @@
# Community
-Explore the many ways to connect, learn, and participate in the ComposeDB community.
+Explore the many ways to connect, learn, and participate in the Ceramic community.
## Chat and Discussion
- **Forum** - Visit the [Ceramic Forum](https://forum.ceramic.network) to get help, ask questions, and discuss improvements.
- **Discord** - Join the [Ceramic Discord](https://chat.ceramic.network) to join general discussions, share your projects, and meet your fellow community members.
-- **Bounties** - Get paid for learning & building on Ceramic, in the [#bounties](https://discord.com/channels/682786569857662976/1040706471689732096/1040719165268426924) channel
## Social Media
-- **Twitter** - Follow us on Twitter at [@ComposeDB](https://twitter.com/ceramicnetwork) for timely updates.
+- **Twitter** - Follow us on Twitter at [@ceramicnetwork](https://twitter.com/ceramicnetwork) for timely updates.
- **YouTube** - Subscribe to the [Ceramic YouTube](https://www.youtube.com/channel/UCgCLq5dx7sX-yUrrEbtYqVw) to watch talks, tutorials, events, and more.
## Resources
- **Blog** - Read the [Ceramic Blog](https://blog.ceramic.network) to discover updates and educational content for developers.
- **Newsletter** - Subscribe to the [Ceramic Newsletter](https://blog.ceramic.network/#/portal/signup) to receive important announcements.
-## Events
-- **Community Calls**
-- **Calendar**
-
## Ways to Contribute
-### Ecosystem Grants
-If you’re interested in a grant feel free to reach out in the #bounties channel. We welcome all submissions, but we’re especially looking for contributions in the following areas:
-
-- **Marketplace GUI:** Wouldn’t it be great if developers could access the composite marketplace with an app UI instead of a terminal?
-- **Sample Apps & Models:** Inspire other developers by building sample apps & data models, like reputation credentials, social apps, or DAO tools.
-- **Easy Node Setup:** Make it easy to deploy ComposeDB on a node - we wrote a [proof of concept](https://github.com/ceramicstudio/ceramic-infra-poc) you can build on. Bonus: Terraform templates for cloud providers like AWS, GCP, and DigitalOcean.
-- **Config Script:** We’d love to make [Set up your environment](../composedb/set-up-your-environment) even easier. Want to create a lightning fast script?
-### **Open source contributions**
-Contribute to the [ComposeDB repository](https://github.com/ceramicstudio/js-composedb): here are some packages to get started
-- [composedb/client](https://github.com/ceramicstudio/js-composedb/tree/main/packages/client)
-- [composedb/cli](https://github.com/ceramicstudio/js-composedb/tree/main/packages/cli)
-We recommend getting in touch on the [#ComposeDB](https://discord.com/channels/682786569857662976/1045002408671068220) Discord channel before diving in.
+### Open Source Contributions
+Contribute to the [Ceramic One repository](https://github.com/ceramicnetwork/rust-ceramic).
-### **Project showcase**
+### Project Showcase
- Reach out on the Ceramic Discord channel [#share-your-project](https://discord.com/channels/682786569857662976/801569389044039710) to show off your project on Ceramic
-- See a list here of the [most popular projects](https://threebox.notion.site/Ceramic-Ecosystem-a3a7a58f81544d33ad3feb84368775d4).
+- See a list of [projects in the ecosystem](https://threebox.notion.site/Ceramic-Ecosystem-a3a7a58f81544d33ad3feb84368775d4).
diff --git a/docs/introduction/ceramic-roadmap.md b/docs/introduction/ceramic-roadmap.md
deleted file mode 100644
index ea053e35..00000000
--- a/docs/introduction/ceramic-roadmap.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Ceramic Roadmap
-
-
-
-
-
-
-Since the launch of the ComposeDB Beta, the core Ceramic team remains committed to making ongoing improvements
-to both ComposeDB and the underlying Ceramic protocol. Concurrently, we seek to involve the Ceramic developer
-community in shaping Ceramic's future. We value your active participation in helping us prioritize the features
-and improvements that matter most to our developer base.
-
-**All current and future projects are outlined in the [Ceramic roadmap](https://github.com/orgs/ceramicstudio/projects/2).**
-
-We welcome your feedback and insights on our roadmap priorities. You can show your support or express your concerns
-about projects on the roadmap by upvoting or downvoting them. Additionally, we encourage you to leave more detailed
-comments, making suggestions or indicating relevant feature requests.
\ No newline at end of file
diff --git a/docs/introduction/composedb-overview.md b/docs/introduction/composedb-overview.md
deleted file mode 100644
index f21d0f5f..00000000
--- a/docs/introduction/composedb-overview.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# ComposeDB
-
-
-ComposeDB is a composable graph database built on [Ceramic](https://ceramic.network), designed for Web3 applications.
-
-### Use Cases
-| Use Case | Examples |
-|---|---|
-|__Decentralized identity__| `user profiles` `credentials` `reputation systems` |
-|__Web3 social__| `social graphs` `posts` `reactions` `comments` `messages` |
-|__DAO tools__| `proposals` `projects` `tasks` `votes` `contribution graphs` |
-|__Open information graphs__| `DeSci graphs` `knowledge graphs` `discourse graphs` |
-
-### Why ComposeDB?
-
-- Store and query data with powerful, easy-to-use GraphQL APIs
-- Build faster with a catalog of plug-and-play schemas
-- Bootstrap content by plugging into a composable data ecosystem
-- Deliver great UX with sign-in with Ethereum, Solana, and more
-- Eliminate trust and guarantee data verifiability
-- Scale your Web3 data infrastructure beyond L1 or L2 blockchains
-
-### Project Status: `Beta`
-
-ComposeDB officially entered `Beta` on February 28, 2023. What does this mean?
-
-- You can now build and deploy apps to production on mainnet!
-- Core features like GraphQL APIs, reusable models, and data composability are available
-- We will continue to improve performance and add more features
-- We are not yet guaranteeing a 100% stable, bug-free platform
-
-If you want to provide feedback, request new features, or report insufficient performance, please [make a post on the Forum](https://forum.ceramic.network/), as we'd like to work with you.
-Thank you for being a ComposeDB pioneer and understanding that great Web3 protocols take time to mature.
-
----
-
-
-### [Get Started →](../composedb/getting-started)
-Build a Hello World application and interact from the CLI.
-
-### [Development Guides →](../composedb/guides)
-Learn about data modeling, application set up, and data interactions.
-
-
-### [Core concepts →](../composedb/core-concepts)
-Dive deeper into the ComposeDB protocol and its components.
-
-### [Community →](../ecosystem/community)
-Connect with the ComposeDB developer community.
diff --git a/docs/introduction/intro.md b/docs/introduction/intro.md
index dce446a1..e4a18fb3 100644
--- a/docs/introduction/intro.md
+++ b/docs/introduction/intro.md
@@ -1,6 +1,6 @@
# The Composable Data Network
-Ceramic is a decentralized data network that powers an ecosystem of interoperable Web3 applications and services. Ceramic's event streaming protocol is a highly-scalable decentralized data infrastructure used for building all kinds of interoperable Web3 services and protocols, such as decentralized databases. Ceramic-powered databases and services enable thousands of Web3 developers to build data-intensive applications and solve the world's most complex data challenges. By decentralizing application databases, Ceramic makes data composable and reusable across all applications.
+Ceramic is a decentralized data network that powers an ecosystem of interoperable Web3 applications and services. Ceramic's event streaming protocol is a highly-scalable decentralized data infrastructure used for building all kinds of interoperable Web3 services and protocols. By decentralizing application databases, Ceramic makes data composable and reusable across all applications.

@@ -8,26 +8,32 @@ Ceramic is a decentralized data network that powers an ecosystem of interoperabl
## Introduction to Ceramic
---
-- Head to the [**ComposeDB**](./composedb-overview.md) section to learn more about stream-level Ceramic functionality.
+- Head to [**Ceramic One**](../protocol/ceramic-one/) to get started with Ceramic
-- Head to the [**Ceramic Protocol**](./protocol-overview.md) section to learn about lower-level Ceramic functionality
+- Learn about [**Ceramic Protocol**](./protocol-overview.md) concepts and architecture
+
+- Explore [**Decentralized Identifiers**](./did-overview.md) for user authentication
- Explore use cases and projects [**built on Ceramic**](https://threebox.notion.site/Ceramic-Ecosystem-a3a7a58f81544d33ad3feb84368775d4)
-## Build Applications
+## Get Started with Ceramic One
---
-### [**ComposeDB: Graph DB for Web3 Apps →**](../composedb/getting-started)
+### [**Ceramic One: The Rust Implementation →**](../protocol/ceramic-one/)
-ComposeDB is a decentralized graph database powered by Ceramic that enables you to build powerful Web3 applications using composable data, GraphQL, and reusable models. ComposeDB is the newest and most popular database built on Ceramic.
+Ceramic One is the next-generation Ceramic node written in Rust. It provides:
+- High-performance event streaming and synchronization
+- Efficient data pipeline with Flight SQL queries
+- Self-anchoring capabilities for EVM blockchains
+- Native support for models and model instance documents
## Run a Ceramic Node
---
-Run a Ceramic node to provide data storage, compute, and bandwidth for your Ceramic application. Today there are no tokenized incentives for running a Ceramic node, but by running a node you can ensure the data for your app remains available while helping contribute to the network's decentralization.
+Run a Ceramic node to provide data storage, compute, and bandwidth for your Ceramic application. By running a node you can ensure the data for your app remains available while helping contribute to the network's decentralization.
-- [**Run Ceramic in the cloud**](../protocol/js-ceramic/guides/ceramic-nodes/running-cloud)
+- [**Install Ceramic One**](../protocol/ceramic-one/usage/installation)
-- [**Run Ceramic locally**](../protocol/js-ceramic/guides/ceramic-nodes/running-locally)
+- [**Configure Self-Anchoring**](../protocol/ceramic-one/anchoring/overview)
diff --git a/docs/introduction/protocol-overview.md b/docs/introduction/protocol-overview.md
index 8ffcbb85..3cb7d380 100644
--- a/docs/introduction/protocol-overview.md
+++ b/docs/introduction/protocol-overview.md
@@ -1,7 +1,7 @@
# Ceramic Protocol
-Ceramic is a decentralized event streaming protocol that enables developers to build decentralized databases, distributed compute pipelines, and authenticated data feeds, etc. Ceramic nodes can subscribe to subsets of streams forgoing the need of a global network state. This makes Ceramic an eventually consistent system (as opposed to strongly consistent like L1 blockchains), enabling web scale applications to be built reliably.
+Ceramic is a decentralized event streaming protocol that enables developers to build decentralized databases, distributed compute pipelines, and authenticated data feeds. Ceramic nodes can subscribe to subsets of streams forgoing the need of a global network state. This makes Ceramic an eventually consistent system (as opposed to strongly consistent like L1 blockchains), enabling web scale applications to be built reliably.
## Core Components
@@ -10,44 +10,29 @@ Ceramic is a decentralized event streaming protocol that enables developers to b
The Ceramic protocol consists of the following components:
-- [**Streams →**](../protocol/js-ceramic/streams/streams-index)
-- [**Accounts →**](../protocol/js-ceramic/accounts/accounts-index.md)
-- [**Networking →**](../protocol/js-ceramic/networking/networking-index.md)
-- [**Ceramic API →**](../protocol/js-ceramic/api.md)
-- [**Ceramic Nodes →**](../protocol/js-ceramic/nodes/overview.md)
+- [**Concepts →**](../protocol/ceramic-one/concepts) - Understand events, streams, interests, and the data pipeline
+- [**Installation →**](../protocol/ceramic-one/usage/installation) - Get started with Ceramic One
+- [**Producing Events →**](../protocol/ceramic-one/usage/produce) - Create and update streams
+- [**Consuming Events →**](../protocol/ceramic-one/usage/consume) - Subscribe and read from streams
+- [**Querying Data →**](../protocol/ceramic-one/usage/query) - Use Flight SQL to query the pipeline
-## Specification Status
+## Self-Anchoring
---
-| Section | State |
-| --- | --- |
-| [1. Streams](../protocol/js-ceramic/streams/streams-index) | **[Draft/WIP](../protocol/js-ceramic/streams/streams-index)** |
-| [1.1. Event Log](../protocol/js-ceramic/streams/event-log) | **[Reliable](../protocol/js-ceramic/streams/event-log)** |
-| [1.2. URI Scheme](../protocol/js-ceramic/streams/uri-scheme) | **[Reliable](../protocol/js-ceramic/streams/uri-scheme)** |
-| [1.3. Consensus](../protocol/js-ceramic/streams/consensus) | **[Draft/WIP](../protocol/js-ceramic/streams/consensus)** |
-| [1.4. Lifecycle](../protocol/js-ceramic/streams/lifecycle) | **[Reliable](../protocol/js-ceramic/streams/lifecycle)** |
-| [2. Accounts](../protocol/js-ceramic/accounts/accounts-index) | **[Draft/WIP](../protocol/js-ceramic/accounts/accounts-index)** |
-| [2.1. Decentralized Identifiers](../protocol/js-ceramic/accounts/decentralized-identifiers) | **[Draft/WIP](../protocol/js-ceramic/accounts/decentralized-identifiers)** |
-| [2.2. Authorizations](../protocol/js-ceramic/accounts/authorizations) | **[Reliable](../protocol/js-ceramic/accounts/authorizations)** |
-| [2.3. Object-Capabilities](../protocol/js-ceramic/accounts/object-capabilities) | **[Draft/WIP](../protocol/js-ceramic/accounts/object-capabilities)** |
-| [3. Networking](../protocol/js-ceramic/networking/networking-index) | **[Draft/WIP](../protocol/js-ceramic/networking/networking-index)** |
-| [3.1. Tip Gossip](../protocol/js-ceramic/networking/tip-gossip) | **[Reliable](../protocol/js-ceramic/networking/tip-gossip)** |
-| [3.2. Tip Queries](../protocol/js-ceramic/networking/tip-queries) | **[Reliable](../protocol/js-ceramic/networking/tip-queries)** |
-| [3.3. Event Fetching](../protocol/js-ceramic/networking/event-fetching) | **[Reliable](../protocol/js-ceramic/networking/event-fetching)** |
-| [3.4. Network Identifiers](../protocol/js-ceramic/networking/networks) | **[Reliable](../protocol/js-ceramic/networking/networks)** |
-| [4. API](../protocol/js-ceramic/api) | **[Missing](../protocol/js-ceramic/api)** |
-| [5. Nodes](../protocol/js-ceramic/nodes/overview) | **[Draft/WIP](../protocol/js-ceramic/nodes/overview)** |
-
-#### **Legend**
-
-| Spec state | Label |
-| --- | --- |
-| Unlikely to change in the foreseeable future. | **Stable** |
-| All content is correct. Important details are covered. | **Reliable** |
-| All content is correct. Details are being worked on. | **Draft/WIP** |
-| Do not follow. Important things have changed. | **Incorrect** |
-| No work has been done yet. | **Missing** |
+Ceramic One supports self-anchoring to EVM blockchains, allowing you to run your own anchor service:
+- [**Self-Anchoring Overview →**](../protocol/ceramic-one/anchoring/overview)
+- [**EVM Configuration →**](../protocol/ceramic-one/anchoring/evm-configuration)
+
+## Authentication
+
+---
+
+Ceramic uses Decentralized Identifiers (DIDs) for authentication:
+
+- [**DIDs Introduction →**](../dids/introduction)
+- [**Authorization →**](../dids/authorization)
+- [**Managing Sessions →**](../dids/managing-sessions)
diff --git a/docs/introduction/technical-reqs.md b/docs/introduction/technical-reqs.md
index 6fe85116..f3578045 100644
--- a/docs/introduction/technical-reqs.md
+++ b/docs/introduction/technical-reqs.md
@@ -6,24 +6,16 @@ Ceramic is a decentralized data storage network made up of different components,

-
-To make it easier to grasp, you can think about implementing Ceramic just like you might think about implementing a traditional SQL or PostgreSQL database.
-
When integrating with Ceramic, you will be running a few different services and components, each serving a specific purpose for running your application:
-- `js-ceramic` - provides the HTTP API access for connected clients to read the streams stored on the Ceramic network
-- `ceramic-one` - responsible for storing the actual data and coordinate with network participants.
-- `PostgreSQL` - used for indexing data
-- `Ethereum RPC node API access` - required to validate Ceramic Anchor Service (CAS) anchors.
-- `Ceramic Anchor Service (CAS) access` - Anchors Ceramic protocol proofs to the blockchain. This service is currently funded by 3box Labs, however, eventually, this function will be provided by node operators and with some expected cost.
+- `ceramic-one` - the Ceramic node written in Rust, responsible for storing data, providing HTTP API access, and coordinating with network participants
+- `EVM RPC node access` - required for self-anchoring to EVM blockchains (optional, but recommended for production)
-Ceramic nodes are simply pieces of software than run on a server. PostgreSQL is a type of traditional database.
+With self-anchoring support, you can run your own anchor service on any EVM-compatible blockchain instead of relying on external services. See [Self-Anchoring](../protocol/ceramic-one/anchoring/overview) for more details.
## Hardware requirements
-For most projects, all three components of Ceramic can be run on the same server. Thus the main consideration impacting costs are the hardware requirements of your server.
-
-Depending on the expected throughput of your project, the suggested hardware requirements will differ. Below, you can find the estimated hardware requirements based on a different levels of expected throughput.
+Depending on the expected throughput of your project, the suggested hardware requirements will differ. Below, you can find the estimated hardware requirements based on different levels of expected throughput.
### Minimum (light throughput)
@@ -33,8 +25,6 @@ Depending on the expected throughput of your project, the suggested hardware req
| RAM | 4GB |
| Storage | 110GB |
-
-
### Recommended
As your project scales, you may need to expand your storage beyond 180GB.
@@ -45,72 +35,17 @@ As your project scales, you may need to expand your storage beyond 180GB.
| RAM | 8 GB |
| Storage | 180GB |
-
### Advanced (heavy throughput)
-Advanced users may want to consider running the PostgreSQL database on a different server than the Ceramic node. If you choose to run them on different servers, a VPC can be used to establish the communication between them.
-
-Ceramic node:
-
-| Resource | Size |
-| --- | --- |
-| CPU | 2-4 CPU Cores |
-| RAM | 8 GB |
-| Storage | 180GB |
-
-PostgreSQL DB:
-
-| Resource | Size |
-| --- | --- |
-| CPU | 1-2 CPU Cores |
-| RAM | 4 GB |
-| Storage | 110GB |
-
+| Resource | Size |
+| --- | --- |
+| CPU | 4 CPU Cores |
+| RAM | 8 GB |
+| Storage | 180GB+ |
## Hosting solutions and costs
-One of the key factors impacting costs is how you choose to host your Ceramic node. A few options are shown below. Monthly server costs are **estimated** based on the hardware requirements above.
+One of the key factors impacting costs is how you choose to host your Ceramic node. A few options are shown below. Monthly server costs are **estimated** based on the hardware requirements above.
@@ -134,5 +69,3 @@ One of the key factors impacting costs is how you choose to host your Ceramic no
- Application developers who prefer to use third party managed node services can offload node management responsibilities to dedicated professionals
-
-
diff --git a/docs/protocol/ceramic-one/anchoring/evm-configuration.mdx b/docs/protocol/ceramic-one/anchoring/evm-configuration.mdx
new file mode 100644
index 00000000..ae56fa2c
--- /dev/null
+++ b/docs/protocol/ceramic-one/anchoring/evm-configuration.mdx
@@ -0,0 +1,226 @@
+---
+title: EVM Blockchain Configuration
+description: Configure self-anchoring for any EVM-compatible blockchain
+---
+
+# EVM Blockchain Configuration
+
+This guide covers the configuration options for running self-anchoring on EVM-compatible blockchains.
+
+## Anchor Contract
+
+Self-anchoring requires an anchor contract deployed on your chosen EVM blockchain.
+
+### Pre-deployed Contracts
+
+The anchor contract is already deployed on the following networks:
+
+| Network | Chain ID | Contract Address |
+|---------|----------|------------------|
+| Ethereum Mainnet | 1 | `0x231055A0852D67C7107Ad0d0DFeab60278fE6AdC` |
+| Gnosis Chain | 100 | `0x231055A0852D67C7107Ad0d0DFeab60278fE6AdC` |
+
+You can use these addresses directly with `--evm-contract-address` without deploying your own contract.
+
+### Deploying to Other Networks
+
+To deploy the anchor contract on a different EVM chain, start from the contract source code, available at
+[ceramicnetwork/ceramic-anchor-service/contracts](https://github.com/ceramicnetwork/ceramic-anchor-service/tree/develop/contracts).
+
+The contract (`CeramicAnchorServiceV2.sol`) implements [CIP-110](https://github.com/ceramicnetwork/CIPs/blob/main/CIPs/cip-110.md) for indexable anchors.
+
+**Deployment steps:**
+
+1. Clone the repository:
+ ```bash
+ git clone https://github.com/ceramicnetwork/ceramic-anchor-service.git
+ cd ceramic-anchor-service/contracts
+ ```
+
+2. Install dependencies (requires [Foundry](https://book.getfoundry.sh/getting-started/installation)):
+ ```bash
+ make installDeps
+ make build
+ ```
+
+3. Deploy to your network:
+ ```bash
+ export ETH_WALLET_PK=your-private-key
+ export ETH_RPC_HOST=https://rpc.yournetwork.com
+ export ETH_RPC_PORT=443
+ make create
+ ```
+
+4. Note the deployed contract address from the output for use with `--evm-contract-address`.
+
+## Required Options
+
+All of these options must be provided together to enable self-anchoring:
+
+| Option | Environment Variable | Description |
+|--------|---------------------|-------------|
+| `--evm-rpc-url` | `CERAMIC_ONE_EVM_RPC_URL` | RPC endpoint URL for submitting anchor transactions and verifying anchor proofs |
+| `--evm-private-key` | `CERAMIC_ONE_EVM_PRIVATE_KEY` | Private key in hex format (without 0x prefix) |
+| `--evm-chain-id` | `CERAMIC_ONE_EVM_CHAIN_ID` | Network/chain ID (e.g., 1 for Ethereum mainnet) |
+| `--evm-contract-address` | `CERAMIC_ONE_EVM_CONTRACT_ADDRESS` | Deployed anchor contract address (see [Anchor Contract](#anchor-contract)) |
+
+The `--evm-rpc-url` is used both for submitting anchor transactions and for verifying anchor proofs on that chain.
+
+## Optional Settings
+
+| Option | Environment Variable | Default | Description |
+|--------|---------------------|---------|-------------|
+| `--evm-confirmations` | `CERAMIC_ONE_EVM_CONFIRMATIONS` | 4 | Number of block confirmations before anchor is final |
+| `--anchor-interval` | `CERAMIC_ONE_ANCHOR_INTERVAL` | 3600 | Anchoring frequency in seconds |
+| `--additional-chain-rpc-urls` | `CERAMIC_ONE_ADDITIONAL_CHAIN_RPC_URLS` | - | Comma-separated list of RPC URLs for validating anchors from other chains |
+
+### Validating Anchors from Other Chains
+
+If your node syncs events from the network that were anchored on different chains (e.g., historical anchors or events from other nodes), you can provide additional RPC endpoints to validate those proofs:
+
+```bash
+ceramic-one daemon \
+ --evm-rpc-url https://rpc.gnosis.io \
+ --additional-chain-rpc-urls https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY,https://polygon-rpc.com \
+ ...
+```
+
+The node uses the first valid RPC for each chain ID.
+
+## Configuration Methods
+
+### CLI Flags
+
+```bash
+ceramic-one daemon \
+ --evm-rpc-url https://rpc.yournetwork.com \
+ --evm-private-key abcd1234... \
+ --evm-chain-id 12345 \
+ --evm-contract-address 0x1234567890abcdef... \
+ --evm-confirmations 4 \
+ --anchor-interval 3600
+```
+
+### Environment Variables
+
+```bash
+export CERAMIC_ONE_EVM_RPC_URL=https://rpc.yournetwork.com
+export CERAMIC_ONE_EVM_PRIVATE_KEY=abcd1234...
+export CERAMIC_ONE_EVM_CHAIN_ID=12345
+export CERAMIC_ONE_EVM_CONTRACT_ADDRESS=0x1234567890abcdef...
+export CERAMIC_ONE_EVM_CONFIRMATIONS=4
+export CERAMIC_ONE_ANCHOR_INTERVAL=3600
+
+ceramic-one daemon
+```
+
+### Docker Configuration
+
+```yaml
+version: '3.8'
+
+services:
+ ceramic-one:
+ image: public.ecr.aws/r5b3e0r5/3box/ceramic-one:latest
+ network_mode: "host"
+ environment:
+ - CERAMIC_ONE_EVM_RPC_URL=https://rpc.yournetwork.com
+ - CERAMIC_ONE_EVM_PRIVATE_KEY=${EVM_PRIVATE_KEY}
+ - CERAMIC_ONE_EVM_CHAIN_ID=12345
+ - CERAMIC_ONE_EVM_CONTRACT_ADDRESS=0x...
+ - CERAMIC_ONE_EVM_CONFIRMATIONS=4
+ - CERAMIC_ONE_ANCHOR_INTERVAL=3600
+ volumes:
+ - ceramic-one-data:/root/.ceramic-one
+
+volumes:
+ ceramic-one-data:
+ driver: local
+```
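+
+Assuming the file is saved as `docker-compose.yaml`, you can then start the node with `docker compose up -d`.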
+
+## Security Considerations
+
+### Private Key Management
+
+- **Never commit private keys** to version control
+- Use environment variables or a secrets manager for production deployments (see the sketch after this list)
+- Consider using a dedicated wallet with limited funds for anchoring
+- Rotate keys periodically according to your security policies
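+
+For example, rather than exporting the key inline, you might load it from a mounted secrets file at startup (a sketch assuming a Docker-style secret at `/run/secrets/evm_key`):
+
+```bash
+export CERAMIC_ONE_EVM_PRIVATE_KEY="$(cat /run/secrets/evm_key)"
+ceramic-one daemon
+```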
+
+### RPC Endpoint Security
+
+- Use HTTPS endpoints only
+- Consider running your own RPC node for production workloads
+- Be aware of rate limits on public RPC providers
+- Monitor RPC endpoint availability
+
+### Wallet Funding
+
+- Ensure the wallet has sufficient funds for transaction fees
+- Set up monitoring and alerts for low balances
+- Consider the transaction costs of your chosen network when planning
+
+## Configuration Tips
+
+### Choosing Confirmation Count
+
+The `--evm-confirmations` setting determines how many blocks must be mined after your anchor transaction before it's considered final:
+
+- **Lower values (1-2)**: Faster finality, slightly higher reorganization risk
+- **Default (4)**: Good balance for most networks
+- **Higher values (6+)**: Maximum security, slower finality
+
+Adjust based on your network's block time and finality guarantees.
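+
+For example, on a chain with 12-second blocks, the default of 4 confirmations means an anchor is treated as final roughly 48 seconds after the transaction is included.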
+
+### Anchor Interval
+
+The `--anchor-interval` controls how frequently anchoring occurs:
+
+- **Shorter intervals**: More frequent anchoring, higher costs, faster timestamp verification
+- **Longer intervals**: Lower costs, events wait longer for anchoring
+
+Choose based on your application's requirements for timestamp verification speed versus cost.
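+
+A back-of-envelope way to compare settings (the per-transaction fee below is an assumed placeholder, not a measured value):
+
+```bash
+# Estimate daily anchor transactions and cost for a given interval
+ANCHOR_INTERVAL=3600    # seconds between anchors
+FEE_PER_ANCHOR=0.0001   # assumed native-token fee per anchor transaction
+echo "anchors/day: $(( 86400 / ANCHOR_INTERVAL ))"
+echo "cost/day:    $(echo "86400 / $ANCHOR_INTERVAL * $FEE_PER_ANCHOR" | bc -l)"
+```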
+
+## Migration from Remote Anchor Service
+
+The `--remote-anchor-service-url` option is now deprecated in favor of self-anchoring.
+
+To migrate:
+1. Remove the `--remote-anchor-service-url` configuration
+2. Choose a network and use a [pre-deployed contract](#pre-deployed-contracts) or [deploy your own](#deploying-to-other-networks)
+3. Ensure your wallet is funded on your chosen network
+4. Add the EVM configuration options described above, as in the before/after sketch below
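+
+A minimal before/after comparison (the CAS URL shown is a placeholder for whatever remote service you used):
+
+```bash
+# Before (deprecated):
+# ceramic-one daemon --remote-anchor-service-url https://cas.example.com
+
+# After (self-anchoring):
+ceramic-one daemon \
+  --evm-rpc-url https://rpc.yournetwork.com \
+  --evm-private-key abcd1234... \
+  --evm-chain-id 12345 \
+  --evm-contract-address 0x1234567890abcdef...
+```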
+
+## Troubleshooting
+
+### Transaction Failures
+
+If anchor transactions are failing:
+- Verify the wallet has sufficient funds for gas fees
+- Check that the RPC endpoint is accessible and responsive
+- Confirm the chain ID matches the network your RPC endpoint connects to
+- Ensure the contract address is correct for your network
+
+### Slow Anchoring
+
+If anchoring is taking longer than expected:
+- Check network congestion on your chosen blockchain
+- Verify the RPC endpoint is performant
+- Consider adjusting `--anchor-interval` for your use case
+
+### Connection Issues
+
+If Ceramic One cannot connect to the RPC endpoint:
+- Test the endpoint URL directly with curl or another HTTP client, as sketched below
+- Check for firewall rules blocking outbound connections
+- Verify the endpoint supports the required JSON-RPC methods
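+
+A quick sanity check with the standard `eth_chainId` JSON-RPC method verifies both connectivity and the chain ID in one call:
+
+```bash
+curl -s -X POST https://rpc.yournetwork.com \
+  -H "Content-Type: application/json" \
+  -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
+# Expect the hex-encoded chain ID in "result", e.g. "0x3039" for chain ID 12345
+```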
+
+## Transaction Recovery
+
+Ceramic One automatically handles transaction recovery for pending anchor operations from previous runs. If the daemon is restarted while anchoring transactions are pending, it will:
+
+1. Detect pending transactions from the previous session
+2. Monitor their status on the blockchain
+3. Complete the anchoring process once transactions are confirmed
diff --git a/docs/protocol/ceramic-one/anchoring/overview.mdx b/docs/protocol/ceramic-one/anchoring/overview.mdx
new file mode 100644
index 00000000..71b5bd31
--- /dev/null
+++ b/docs/protocol/ceramic-one/anchoring/overview.mdx
@@ -0,0 +1,82 @@
+---
+title: Self-Anchoring Overview
+description: Run your own anchor service on any EVM-compatible blockchain
+---
+
+# Self-Anchoring Overview
+
+Ceramic One supports self-anchoring, allowing you to run your own anchor service instead of relying on an external Ceramic Anchor Service (CAS). This gives you full control over your anchoring infrastructure and supports any EVM-compatible blockchain.
+
+## What is Anchoring?
+
+Anchoring is the process of writing cryptographic commitments to a blockchain, providing a verifiable timestamp and ordering for Ceramic events. This ensures data integrity and enables conflict resolution across the decentralized network.
+
+When events are anchored:
+- They receive a blockchain-verified timestamp
+- Their ordering becomes globally verifiable
+- Conflicts can be deterministically resolved
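+
+For example, if a node observes two conflicting branches of the same stream, the branch that was anchored earlier can be selected deterministically by every node, since all of them can read the same timestamps from the chain.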
+
+## Benefits of Self-Anchoring
+
+- **Independence**: No reliance on external anchor services
+- **Flexibility**: Use any EVM-compatible blockchain
+- **Control**: Configure anchoring frequency and confirmation requirements
+- **Cost Management**: Choose networks with appropriate transaction costs for your use case
+
+## Prerequisites
+
+Before configuring self-anchoring, ensure you have:
+
+1. **A running Ceramic One node** - See [Installation](../usage/installation)
+2. **Access to an EVM RPC endpoint** - For submitting transactions and verifying anchor proofs
+3. **A funded wallet** - The wallet must have sufficient funds for transaction fees on your chosen network
+4. **A deployed anchor contract** - See [Deploying the Anchor Contract](./evm-configuration#deploying-the-anchor-contract)
+
+## Quick Start
+
+```bash
+ceramic-one daemon \
+ --evm-rpc-url https://rpc.yournetwork.com \
+ --evm-private-key your-private-key-hex \
+ --evm-chain-id 1 \
+ --evm-contract-address 0x... \
+ --anchor-interval 3600
+```
+
+Or use environment variables:
+
+```bash
+export CERAMIC_ONE_EVM_RPC_URL=https://rpc.yournetwork.com
+export CERAMIC_ONE_EVM_PRIVATE_KEY=your-private-key-hex
+export CERAMIC_ONE_EVM_CHAIN_ID=1
+export CERAMIC_ONE_EVM_CONTRACT_ADDRESS=0x...
+export CERAMIC_ONE_ANCHOR_INTERVAL=3600
+
+ceramic-one daemon
+```
+
+See [EVM Configuration](./evm-configuration) for detailed setup instructions and all available options.
+
+## Configuration Summary
+
+| Option | Environment Variable | Purpose |
+|--------|---------------------|---------|
+| `--evm-rpc-url` | `CERAMIC_ONE_EVM_RPC_URL` | Submit anchor transactions and verify proofs |
+| `--evm-private-key` | `CERAMIC_ONE_EVM_PRIVATE_KEY` | Sign anchor transactions |
+| `--evm-chain-id` | `CERAMIC_ONE_EVM_CHAIN_ID` | Target blockchain network |
+| `--evm-contract-address` | `CERAMIC_ONE_EVM_CONTRACT_ADDRESS` | Anchor contract address |
+| `--additional-chain-rpc-urls` | `CERAMIC_ONE_ADDITIONAL_CHAIN_RPC_URLS` | Validate anchors from other chains (optional) |
+
+## How It Works
+
+1. **Event Collection**: Ceramic One collects pending events that need anchoring
+2. **Merkle Tree Construction**: Events are batched and organized into a Merkle tree
+3. **Blockchain Transaction**: The Merkle root is written to the anchor contract on your chosen EVM chain
+4. **Confirmation**: After the configured number of block confirmations, anchors are considered final
+5. **Proof Verification**: The node verifies anchor proofs using the configured RPC endpoints
+6. **Proof Distribution**: Anchor proofs are distributed to relevant streams
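+
+One practical consequence of the Merkle batching in step 2 is that proof size grows only logarithmically with batch size: a batch of 1,024 events needs a tree of depth 10, so each event's proof carries just 10 sibling hashes alongside the transaction reference.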
+
+## Next Steps
+
+- [EVM Configuration](./evm-configuration) - Detailed configuration options for EVM anchoring
+- [Concepts](../concepts) - Learn more about Ceramic's event streaming architecture
diff --git a/docs/protocol/ceramic-one/concepts.mdx b/docs/protocol/ceramic-one/concepts.mdx
index 3bbfb81d..d460d208 100644
--- a/docs/protocol/ceramic-one/concepts.mdx
+++ b/docs/protocol/ceramic-one/concepts.mdx
@@ -7,10 +7,8 @@ Ceramic combines a powerful event streaming platform with the open access and ve
modern, decentralized web. This page provides a brief overview of the core architecture, with a
focus on how the high-level components of the protocol fit together.
-You may find this page helpful even if you are working with a database or other application built on
-top of the core protocol, such as ComposeDB, although there may be some details here that are less
-relevant to your needs, and some things that are better covered by the documentation of the tool
-you're interacting with directly.
+This page provides essential background for understanding how Ceramic works, from event streaming
+fundamentals to the data pipeline that transforms raw events into queryable state.
## Event Streaming
@@ -195,15 +193,9 @@ future update events. Updates to documents take the form of events containing JS
which encode an operation to perform on the current state, for example, set the field `"name"` to
the value `"Ada Lovelace"`.
-Databases built on Ceramic like ComposeDB extend the idea of JSON-based documents and defines a shared
-vocabulary of "models." A model specifies the fields, data types, and relationships to other models
-that are supported for a specific kind of data. For example, a simple social media post model might
-include a text status field, a link to the author's account, and optional links to attached media
-objects.
-
-In Ceramic, models are associated with the model instance document stream type. Ceramic is, however,
-designed with the flexibility to process events for various stream types, with models representing
-just one stream category. Therefore, it is important to note that the data pipeline steps outlined
-above will vary based on different stream types. While Ceramic currently only supports model and
-model instance document streams, support for new stream types will be introduced by implementing
-their pipeline aggregation steps in the near future.
+A model specifies the fields, data types, and relationships to other models that are supported for a
+specific kind of data. For example, a simple social media post model might include a text status
+field, a link to the author's account, and optional links to attached media objects.
+
+In Ceramic, models are associated with the model instance document stream type. The data pipeline
+steps outlined above process model and model instance document streams.
diff --git a/docs/protocol/ceramic-one/usage/installation.mdx b/docs/protocol/ceramic-one/usage/installation.mdx
index 2e4ad2fe..9b2bf361 100644
--- a/docs/protocol/ceramic-one/usage/installation.mdx
+++ b/docs/protocol/ceramic-one/usage/installation.mdx
@@ -11,8 +11,8 @@ https://github.com/ceramicstudio/ceramic-sdk.
The SDK is written in TypeScript and published to NPM as `@ceramic-sdk`, with a handful of useful
sub-packages:
-- `@ceramic-sdk/events`: Utilities for creating and signing events that comply to the
- [Ceramic Event Log specifications](https://developers.ceramic.network/docs/protocol/js-ceramic/streams/event-log)
+- `@ceramic-sdk/events`: Utilities for creating and signing events that comply with Ceramic Event Log
+ specifications
- `@ceramic-sdk/http-client`: A simple client for
[ceramic-one's](https://github.com/ceramicnetwork/rust-ceramic) HTTP APIs.
- `@ceramic-sdk/identifiers`: A handful of useful types for identifying streams and events.
diff --git a/docs/protocol/js-ceramic/accounts/accounts-index.md b/docs/protocol/js-ceramic/accounts/accounts-index.md
deleted file mode 100644
index d3fd2200..00000000
--- a/docs/protocol/js-ceramic/accounts/accounts-index.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Accounts
----
-
-User-owned Ceramic accounts
-
-### Overview
-
-User-owned data requires an account model that is both core to the protocol and general enough to support the wide diversity of possible account models and real-world scenarios. Accounts are identified by Decentralized Identifiers, a general and extensible method to represent unique account strings, resolve public keys, and carry other account info or key material. Object-Capabilities are used to permission and authorize stream writes from one account to another; this may include session keys, applications, and managing organization access.
-
-### [Decentralized Identifiers](decentralized-identifiers.md)
-
-Decentralized Identifiers (DIDs) are used to represent accounts. DIDs are identifiers that enable verifiable, decentralized digital identities. They require no centralized party or registry and are extremely extensible, allowing a variety of implementations and account models to exist.
-
-### [Authorizations](authorizations.md)
-
-Authorizations allow one account to delegate stream access to another account. While the current model is simple and minimal, it is descriptive enough to follow the rule of least privilege and limit the access that is delegated to another account.
-
-### [Object Capabilities](object-capabilities.md)
-
-Object Capabilities or CACAO are the technical feature and implementation that enables support for permissions and a general and powerful capability-based authorization system over streams.
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/accounts/authorizations.md b/docs/protocol/js-ceramic/accounts/authorizations.md
deleted file mode 100644
index 7a846b1d..00000000
--- a/docs/protocol/js-ceramic/accounts/authorizations.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Authorizations
----
-
-Authorization is the act of delegating access to a stream to an account that is different than its owner. As a best practice, when granting authorizations to another account you want to follow the rule of least privilege and only authorize that delegate's temporary key to write the minimally needed data to Ceramic.
-
-## Scopes
-
----
-
-CACAO and Ceramic support a basic way to describe the resources and actions an authorization includes. The resource parameter is an array of strings. In Ceramic those strings are StreamIDs or model StreamIDs. The implied action is write access, as read access is not authorized in any way at the protocol level. Read access would require an encryption protocol, as streams are public, and is out of scope for now.
-
-:::note
- In the future, we expect the ability to specify more granular authorizations based on actions (write, delete, create, update etc) and resources.
-:::
-
-### Streams
-
-For example, to authorize an account to write to only two specific streams, you would specify the streamIds as resources in the CACAO as follows:
-
-```bash
-[ "ceramic://kjzl6cwe1jw14bby1eybtqjr1w5l8xysitwmd34i8huccr7lk8g6xrt2l1c1ngn", "ceramic://kjzl6cwe1jw1476bbp2a0lg8gcmk9zj1xjanpg6dooc3golyb2fnmwmg0p6ane3"]
-```
-
-### Models
-
-The most commonly used pattern is to specify authorizations by model streamIds. `model` is a property that can be defined in a stream's init event. When specified and used with CACAO, it gives a DID and key the ability to write to all streams with this specific model value for that user.
-
-:::note
- Ceramic will likely support other keys and values in streams beyond `model` for authorizations in the future.
-:::
-
-Models at the moment are primarily used as a higher-level concept built on top of Ceramic. A set of models will typically describe the entire write data-model of an application, making it a logical way for a user to authorize an application to write to all streams that are needed for that application.
-
-For example, a simple social application with a user profile and posts would have two corresponding models, a profile model and a post model. The CACAO would have the resources specified by an array of both model streamIds, shown below. This would allow a DID with this CACAO to create and write to any stream with these models, allowing it to create as many posts as necessary.
-
-Resources defined by model streamID are formatted as `ceramic://*?model=` and would be defined as follows for the prior example.
-
-```bash
-[ "ceramic://*?model=kjzl6hvfrbw6c7keo17n66rxyo21nqqaa9lh491jz16od43nokz7ksfcvzi6bwc", "ceramic://*?model=kjzl6hvfrbw6c99mdfpjx1z3fue7sesgua6gsl1vu97229lq56344zu9bawnf96"]
-```
-
-### Wildcard
-
-Lastly a wildcard for all resources is supported. For security reasons, wildcard will be deprecated in the future and is only included here for completeness.
-
-```bash
-[ "ceramic://*" ]
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/accounts/decentralized-identifiers.md b/docs/protocol/js-ceramic/accounts/decentralized-identifiers.md
deleted file mode 100644
index 342ecd5e..00000000
--- a/docs/protocol/js-ceramic/accounts/decentralized-identifiers.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Identifiers
----
-
-Ceramic streams rely on an account model to authenticate and authorize updates to a stream. A fully realized vision of user-owned data includes the use of public key cryptography and the ability to sign data with a public-private key-pair controlled by a user. But key pairs alone are often neither user-friendly nor sufficient, and don't fully represent the range of real-world scenarios.
-
-## Decentralized Identifiers (DIDs)
-
----
-
-Ceramic uses [Decentralized Identifiers (DIDs)](https://w3c.github.io/did-core/) to represent accounts. DIDs are identifiers that enable verifiable, decentralized digital identities. They require no centralized party or registry and are extremely extensible, allowing a variety of implementations and account models to exist.
-
-DID methods are specific implementations of the DID standard that define an identifier namespace along with how to resolve its DID document, which typically stores public keys for signing and encryption. The ability to resolve public keys from identifiers allows anyone to verify a signature for a DID.
-
-## Supported Methods
-
----
-
-At this time, the following DID methods can be used with Ceramic:
-
-### PKH DID
-
-**PKH DID Method**: A DID method that natively supports blockchain accounts. DID documents are statically generated from a blockchain account, allowing blockchain accounts to sign, authorize and authenticate in DID based environments. PKH DID is the primary and recommended method in Ceramic. [did:pkh Method Specification](https://github.com/w3c-ccg/did-pkh/blob/main/did-pkh-method-draft.md)
-
-```bash
-did:pkh:eip155:1:0xb9c5714089478a327f09197987f16f9e5d936e8a
-```
-
-### Key DID
-
-**Key DID Method**: A DID method that expands a cryptographic public key into a DID Document, with support for Ed25519 and Secp256k1. Key DIDs are typically not used in long lived environments. [did:key Method Specification](https://w3c-ccg.github.io/did-method-key/)
-
-```bash
-did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK
-```
-
diff --git a/docs/protocol/js-ceramic/accounts/object-capabilities.md b/docs/protocol/js-ceramic/accounts/object-capabilities.md
deleted file mode 100644
index 704685ba..00000000
--- a/docs/protocol/js-ceramic/accounts/object-capabilities.md
+++ /dev/null
@@ -1,126 +0,0 @@
-# Object Capabilities
----
-
-Ceramic streams support [CACAO](https://chainagnostic.org/CAIPs/caip-74), allowing a basic but powerful capability-based authorization system to exist. CACAO, or "chain agnostic object capabilities", are composable, transferable and verifiable containers for authorizations and encoded in IPLD. For the full CACAO specification and more examples, reference [CAIP-74: CACAO - Chain Agnostic CApability Object](https://chainagnostic.org/CAIPs/caip-74).
-
-## Approach
-
----
-
-Object capability-based authorization systems, of which CACAO is an implementation, are a natural way to represent authorizations in open and distributed systems. Object capabilities require little coordination, are only stored by parties that care about a particular capability, and are self-verifiable.
-
-Contrast this to popular authorization models like access control lists (ACLs), which often rely on the ability to maintain an accurate, agreed-upon, and up-to-date list of authorizations. ACLs are simple and sufficient when you can rely on a single authoritative source to maintain the list, but quickly become difficult in a distributed setting. Maintaining a list amongst many unknown participants becomes a difficult consensus problem and is often very costly at scale, requiring lots of upfront and continuous coordination.
-
-## Usage
-
----
-
-CACAO enables the ability for one account to authorize another account to construct signatures over limited data on their behalf, or in this case write to a Ceramic stream.
-
-### Using blockchain accounts
-
-When combined with ["Sign-in with X"](https://chainagnostic.org/CAIPs/caip-122), CACAO unlocks the ability for blockchain accounts to authorize Ceramic accounts (DIDs) to sign data on their behalf.
-
-This frequently used pattern in Ceramic greatly increases the usability of user-owned data and public-key cryptography. Thanks to the adoption of blockchain based systems, many users now have the ability to easily sign data in web-based environments using their wallet and blockchain account.
-
-### Authorizing sessions
-
-Data-centric systems like Ceramic often have more frequent writes than a blockchain system, so it can be impractical to sign every Ceramic stream event in a blockchain wallet. Instead, with the use of CACAO and "Sign-in with X", many writes can be made by way of a temporary key and DID authorized with a CACAO, allowing a user to sign only once with a blockchain-based account and wallet, then continue to sign many payloads for an authorized amount of time (session).
-
-:::note
-In the future, we expect the ability to model the authorizations for more complex environments and structures including full organizations.
-:::
-
-## Specification
-
----
-
-Support for object capabilities in the core Ceramic protocol is described below. Events in streams are signed payloads formatted in IPLD using DAGJWS (DAG-JOSE), as [described here](../streams/event-log.md). The following describes how this is extended to construct a valid signed payload using CACAO in DAGJWS, by the example of first constructing a JWS with CACAO. A JWS with CACAO can then be directly encoded with DAG-JOSE after.
-
-### JWS with CACAO
-
-JWS CACAO support includes adding a `cap` parameter to the JWS Protected Header and specifying the correct `kid` parameter string. Here is an example protected JWS header with CACAO:
-
-```json
-{
-  "alg": "EdDSA",
-  "cap": "ipfs://bafyreidoaclgf2ptbvflwalfrr6d4iqehkzyidwbzaouprdbjjfb4yim6q",
-  "kid": "did:key:z6MkrBdNdwUPnXDVD1DCxedzVVBpaGi8aSmoXFAeKNgtAer8#z6MkrBdNdwUPnXDVD1DCxedzVVBpaGi8aSmoXFAeKNgtAer8"
-}
-```
-
-Where:
-
-- `alg` - identifies the cryptographic algorithm used to secure the JWS
-- `cap` - maps to a URI string, expected to be an IPLD CID resolvable to a CACAO object
-- `kid` - references the key used to secure the JWS. In the scope here this is expected to be a DID with reference to any key in the DID verification methods. The parameter MUST match the `aud` target of the CACAO object for both the CACAO and corresponding signature to be valid together.
-
-
-Since `cap` is currently not a registered header parameter name in the IANA "JSON Web Signature and Encryption Header Parameters" registry, we treat this as a "Private Header Parameter Name" for now with additional meaning provided by the CACAO for implementations that choose to use this specification.
-
-This means that ignoring the `cap` header during validation will still result in a valid JWS payload by the key defined in the `kid`. It just has no additional meaning beyond what is defined in the CACAO. The `cap` header parameter could also have support added as an extension by using the `crit` (Critical) Header Parameter in the JWS, but there is little reason to invalidate the JWS based on a consumer not understanding the `cap` header, given it is still valid.
-
-### DagJWS with CACAO
-
-#### Construction
-
-Given the JWS with CACAO described in the prior section, follow the DAG-JOSE specification and implementations for the steps to construct a given JWS with CACAO header and payload into a DagJWS. A DagJWS is very similar to any JWS, except that the payload is a base64url-encoded IPLD CID that references the JSON object payload.
-
-#### Verification
-
-The following algorithm describes the steps required to determine if a given DagJWS with CACAO is valid; a code sketch follows the list:
-
-1. Follow [DAG-JOSE specification](https://ipld.io/specs/codecs/dag-jose/spec/) to transform a given DagJWS into a JWS.
-2. Follow JWS specifications to determine if the given JWS is valid. Verifying that the given signature paired with `alg` and `kid` in the protected header is valid over the given payload. If invalid, an error MUST be raised.
-3. Resolve the given URI in the `cap` parameter of the protected JWS header to a CACAO JSON object. Follow the [CAIP-74 CACAO](https://chainagnostic.org/CAIPs/caip-74) specification to determine if the given CACAO is valid. If invalid, an error MUST be raised.
-4. Ensure that the `aud` parameter of the CACAO payload is the same target as the `kid` parameter in the JWS protected header. If they do not match, an error MUST be raised.
-
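-A minimal sketch of these steps in TypeScript; the helper functions declared here (`dagJoseToJws`, `verifyJws`, `decodeProtectedHeader`, `resolveCacao`, `verifyCacao`) are hypothetical stand-ins for the dag-jose, JWS, and CACAO libraries referenced above, not real APIs:
-
-```ts
-// Hypothetical helper signatures standing in for real library calls
-declare function dagJoseToJws(dagJws: unknown): { payload: string; protected: string }
-declare function decodeProtectedHeader(jws: { protected: string }): { alg: string; cap: string; kid: string }
-declare function verifyJws(jws: { payload: string; protected: string }): Promise<void>
-declare function resolveCacao(uri: string): Promise<{ p: { aud: string } }>
-declare function verifyCacao(cacao: { p: { aud: string } }): Promise<void>
-
-async function verifyDagJwsWithCacao(dagJws: unknown): Promise<void> {
-  const jws = dagJoseToJws(dagJws)              // 1. transform DAG-JOSE -> JWS
-  await verifyJws(jws)                          // 2. throws if signature invalid for `kid`/`alg`
-  const header = decodeProtectedHeader(jws)
-  const cacao = await resolveCacao(header.cap)  // 3. resolve the `cap` URI to a CACAO
-  await verifyCacao(cacao)                      //    and validate it per CAIP-74
-  if (cacao.p.aud !== header.kid) {             // 4. CACAO `aud` must match JWS `kid`
-    throw new Error('CACAO aud does not match JWS kid')
-  }
-}
-```
-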
-#### Example
-
-Example IPLD dag-jose encoded block, strings abbreviated.
-
-```js
-{
-  cid: "bagcqcera2mews3mbbzs...quxj4bes7fujkms4kxhvqem2a",
-  value: {
-    jws: {
-      link: CID("bafyreidkjgg6bi4juwx...lb2usana7jvnmtyjb4xbgwl6e"),
-      payload: "AXESIGpJjeCjiaWv...LKw6pIDQfTVrJ4SHlwmsvx",
-      signatures: [
-        {
-          protected: "eyJhbGciOiJFZERTQSIsImNh...GU2djZEpLTmhYSDl4Rm9rdEFKaXlIQiJ9",
-          signature: "6usTYvu5KN0LFTQsWE9U-tqx...h60EgfvjL_rlAW7_tnQUl84sQyogpkLAQ"
-        }
-      ]
-    }
-  }
-}
-```
-
-If `block.value.jws.signatures[0].protected` is decoded, you would see the following object, a JWS protected header as described above:
-
-```json
-{
-  "alg": "EdDSA",
-  "cap": "ipfs://bafyreidoaclgf...yidwbzaouprdbjjfb4yim6q",
-  "kid": "did:key:z6Mkq2ZyjGV54ev...hXH9xFoktAJiyHB#z6Mkq2ZyjGV54ev...hXH9xFoktAJiyHB"
-}
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/api.md b/docs/protocol/js-ceramic/api.md
deleted file mode 100644
index dcd07cd7..00000000
--- a/docs/protocol/js-ceramic/api.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Ceramic API
----
-The new and improved Ceramic API is a work in progress. We will update this page when it's available. In the meantime, have a look at the [HTTP API](./guides/ceramic-clients/javascript-clients/ceramic-http.md) that's implemented by the current JS Ceramic implementation.
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-jsonrpc.md b/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-jsonrpc.md
deleted file mode 100644
index 021e84b7..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-jsonrpc.md
+++ /dev/null
@@ -1,277 +0,0 @@
-# DID JSON-RPC client
-
----
-
-DID JSON-RPC client provides a simple JS API for interacting with Ceramic accounts.
-
-## Things to know
-
----
-
-- Provides the DID object, which must be authenticated, then mounted on the Ceramic object to perform transactions.
-- For Ceramic nodes, the DID client acts as a way to resolve and verify transaction signatures.
-- For Ceramic clients, the DID client serves as a way to create an account, authenticate, sign, and encrypt data.
-- If your project requires transactions, you **need** to install this package or one that offers similar EIP-2844 API support.
-- The DID client library can be used in both browser and Node.js environments.
-- It supports any DID wallet provider that adheres to the [EIP-2844](https://eips.ethereum.org/EIPS/eip-2844) interface.
-- It enables communication between a Ceramic client and any account provider.
-- Ceramic does not work without a DID client, as it is how all participants are identified and how transactions and messages are signed and verified.
-
-## Installation
-
-```sh
-npm install dids
-```
-
-The `DID` class provides the interface on top of underlying account libraries. The next step is to set up your account system, which requires you to make some important decisions about your account model and approach to key management. This process consists of three steps: choosing your account types, installing a provider, and installing resolver(s).
-
-## Choose your account types
-
-Choosing an account type can significantly impact your users' identity and data interoperability. For example, some account types are fixed to a single public key (Key DID, PKH DID), so the data is siloed to that key. In contrast, others (3ID DID) have mutable key management schemes that can support multiple authorized signing keys and work cross-chain with blockchain wallets. Visit each account to learn more about its capabilities.
-
-### [PKH DID](../../../accounts/decentralized-identifiers.md#pkh-did)
-
-Based on Sign-In with Ethereum or similar standards in other blockchain ecosystems. The most popular option; good for users, and relies on existing wallet infrastructure.
-
-### [Key DID](../../../accounts/decentralized-identifiers.md#key-did)
-
-Simple, self-contained DID method.
-
-## Install account resolvers
-
-The next step is to install resolver libraries for all account types that you may need to read and verify data (signatures). This includes _at least_ the resolver for the provider or wallet chosen in the previous step. However, most projects install all resolvers to be safe:
-
-| Account | Resolver libraries | Maintainer |
-| ------- | ----------------------------------------------------------------------------- | ---------- |
-| Key DID | [`key-did-resolver`](./key-did.md#key-did-resolver) | 3Box Labs |
-
-
-
-## Install account providers
-
-Install providers to manage accounts and sign transactions. Once you have chosen one or more account types, you'll need to install the providers for these account types. These will enable the client-side creation and use of accounts within your application. If your application uses Ceramic in a read-only manner without transactions, you do not need to install a provider.
-
-### Using web wallets
-
-The local account providers described below are low-level, run locally, and burden developers with UX issues related to secret key management and transaction signing. Instead of using a local provider, you can alternatively use a wallet system. Wallets wrap providers with additional user experience features related to signing and key management and can be used in place of a provider. The benefit is that multiple applications can access the same wallet and key management system, so users have a continuous experience between applications.
-
-
-
-### Create your own wallet
-
-One option is installing and setting up one or more account providers that run locally. Note that these local signers support different key types:
-
-| Account | Supported Key Types | Provider libraries |
-| ------- | ------------------- | ---------------------------------------------------------------- |
-| Key DID | Ed25519 | [`key-did-provider-ed25519`](./key-did#key-did-provider-ed25519) |
-| Key DID | Secp256k1 | [`key-did-provider-secp256k1`](./key-did#key-did-provider-secp256k1) |
-
-
-
-Note that NFT DID and Safe DID do not have a signer because they are compatible with all other providers.
-
-## Setup your project
-
-You should have installed DID.js and set up your account system, including authentication to perform transactions. When you include everything in your project, it should look like this. Note that the exact code will vary by your setup, including provider and wallet. Consult your provider's documentation for authentication specifics.
-
-```ts
-// Import the Ceramic and DID clients
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { DID } from 'dids'
-
-// Add account system
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-
-// Connect to a Ceramic node
-const API_URL = 'https://your-ceramic-node.com'
-
-// Create the Ceramic object
-const ceramic = new CeramicClient(API_URL)
-
-// ↑ With this setup, you can perform read-only queries.
-// ↓ Continue to authenticate the account and perform transactions.
-
-async function authenticateCeramic(seed) {
- // Activate the account by somehow getting its seed.
- // See further down this page for more details on
- // seed format, generation, and key management.
- const provider = new Ed25519Provider(seed)
- // Create the DID object
- const did = new DID({ provider, resolver: getResolver() })
- // Authenticate with the provider
- await did.authenticate()
- // Mount the DID object to your Ceramic object
- ceramic.did = did
-}
-```
-
-## Common use-cases
-
-### Authenticate the user
-
-:::caution
-
-This flow will vary slightly depending on which account provider library you use. Please see the documentation specific to your provider library.
-:::
-
-```ts
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { DID } from 'dids'
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-
-// `seed` must be a 32-byte long Uint8Array
-async function createJWS(seed) {
- const provider = new Ed25519Provider(seed)
- const did = new DID({ provider, resolver: getResolver() })
- // Authenticate the DID with the provider
- await did.authenticate()
- // This will throw an error if the DID instance is not authenticated
-  const jws = await did.createJWS({ hello: 'world' })
-  return jws
-}
-```
-
-### Enable Ceramic transactions
-
-```ts
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { DID } from 'dids'
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-
-const ceramic = new CeramicClient()
-
-// `seed` must be a 32-byte long Uint8Array
-async function authenticateCeramic(seed) {
- const provider = new Ed25519Provider(seed)
- const did = new DID({ provider, resolver: getResolver() })
- // Authenticate the DID with the provider
- await did.authenticate()
- // The Ceramic client can create and update streams using the authenticated DID
- ceramic.did = did
-}
-```
-
-### Resolve a DID document
-
-```ts
-import { DID } from 'dids'
-import { getResolver } from 'key-did-resolver'
-
-// See https://github.com/decentralized-identity/did-resolver
-const did = new DID({ resolver: getResolver() })
-
-// Resolve a DID document
-await did.resolve('did:key:...')
-```
-
-### Store signed data on IPFS using DagJWS
-
-The DagJWS functionality of the DID library can be used in conjunction with IPFS.
-
-```ts
-const payload = { some: 'data' }
-
-// sign the payload as dag-jose
-const { jws, linkedBlock } = await did.createDagJWS(payload)
-
-// put the JWS into the ipfs dag
-const jwsCid = await ipfs.dag.put(jws, {
- format: 'dag-jose',
- hashAlg: 'sha2-256',
-})
-
-// put the payload into the ipfs dag
-const block = await ipfs.block.put(linkedBlock, { cid: jws.link })
-
-// get the value of the payload using the payload cid
-console.log((await ipfs.dag.get(jws.link)).value)
-// output:
-// > { some: 'data' }
-
-// alternatively get it using the ipld path from the JWS cid
-console.log((await ipfs.dag.get(jwsCid, { path: '/link' })).value)
-// output:
-// > { some: 'data' }
-
-// get the jws from the dag
-console.log((await ipfs.dag.get(jwsCid)).value)
-// output:
-// > {
-// > payload: 'AXESINDmZIeFXbbpBQWH1bXt7F2Ysg03pRcvzsvSc7vMNurc',
-// > signatures: [
-// > {
-// > protected: 'eyJraWQiOiJkaWQ6Mzp1bmRlZmluZWQ_dmVyc2lvbj0wI3NpZ25pbmciLCJhbGciOiJFUzI1NksifQ',
-// > signature: 'pNz3i10YMlv-BiVfqBbHvHQp5NH3x4TAHQ5oqSmNBUx1DH_MONa_VBZSP2o9r9epDdbRRBLQjrIeigdDWoXrBQ'
-// > }
-// > ],
-// > link: CID(bafyreigq4zsipbk5w3uqkbmh2w2633c5tcza2n5fc4x45s6soo54ynxk3q)
-// > }
-```
-
-##### How it Works
-
-As can be observed above, the `createDagJWS` method takes the payload, encodes it using `dag-cbor`, and computes its CID. It then uses this CID as the payload of the JWS that is then signed. The JWS that was just created can be put into IPFS using the dag-jose codec. The encoded block of the payload is also returned; it can be put into IPFS using `ipfs.block.put`. Alternatively, `ipfs.dag.put(payload)` would have the same effect.
-
-### Store encrypted data on IPFS with DagJWE
-
-The DagJWE functionality allows us to encrypt IPLD data to one or multiple DIDs. The resulting JWE object can then be put into ipfs using the dag-jose codec. An authenticated user can decrypt this object at a later point.
-
-```ts
-import CID from 'cids'
-
-const cleartext = { some: 'data', coolLink: new CID('bafyqacnbmrqxgzdgdeaui') }
-
-// encrypt the cleartext object
-const jwe = await did.createDagJWE(cleartext, [
- 'did:3:bafy89h4f9...',
- 'did:key:za234...',
-])
-
-// put the JWE into the ipfs dag
-const jweCid = await ipfs.dag.put(jwe, {
- format: 'dag-jose',
- hashAlg: 'sha2-256',
-})
-
-// get the jwe from the dag and decrypt it
-const dagJWE = await ipfs.dag.get(jweCid)
-console.log(await did.decryptDagJWE(dagJWE))
-// output:
-// > { some: 'data', coolLink: CID(bafyqacnbmrqxgzdgdeaui) }
-```
-
-
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-session.md b/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-session.md
deleted file mode 100644
index 3ff14417..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-session.md
+++ /dev/null
@@ -1,387 +0,0 @@
-# Module: did-session
-
-Manages user account DIDs in web based environments.
-
-## Purpose
-
-Manages, creates, and authorizes a DID session key for a user. Returns an authenticated DID instance
-to be used in other Ceramic libraries. Supports did:pkh for blockchain accounts with Sign-In with
-Ethereum and CACAO for authorization.
-
-## Installation
-
-```sh
-npm install did-session
-```
-
-## Usage
-
-Authorize and use DIDs where needed. Import the AuthMethod you need; Ethereum accounts are used here as an example.
-
-```ts
-import { DIDSession } from 'did-session'
-import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.enable()
-const accountId = await getAccountId(ethProvider, addresses[0])
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
-
-const session = await DIDSession.authorize(authMethod, { resources: [...]})
-
-// Uses DIDs in ceramic, composedb & glaze libraries, e.g.
-const ceramic = new CeramicClient()
-ceramic.did = session.did
-
-// pass ceramic instance where needed
-```
-
-You can serialize a session to store for later and then re-initialize. Currently sessions are valid
-for 1 day by default.
-
-```ts
-// Create session as above, store for later
-const session = await DIDSession.authorize(authMethod, { resources: [...]})
-const sessionString = session.serialize()
-
-// write/save session string where you want (ie localstorage)
-// ...
-
-// Later, re-initialize the session
-const session2 = await DIDSession.fromSession(sessionString)
-const ceramic = new CeramicClient()
-ceramic.did = session2.did
-```
-
-Additional helper functions are available to help you manage a session lifecycle and the user experience.
-
-```ts
-// Check if authorized or created from existing session string
-didsession.hasSession
-
-// Check if session expired
-didsession.isExpired
-
-// Get resources session is authorized for
-didsession.authorizations
-
-// Check number of seconds till expiration, may want to re auth user at a time before expiration
-didsession.expiresInSecs
-```
-
-## Configuration
-
-The resources your app needs write access to must be passed during authorization. Resources are an array
-of model stream IDs or stream IDs. Typically you will just pass resources from `@composedb` libraries, as
-you will already manage your composites and models there. For example:
-
-```ts
-import { ComposeClient } from '@composedb/client'
-
-//... Reference above and `@composedb` docs for additional configuration here
-
-const client = new ComposeClient({ ceramic, definition })
-const resources = client.resources
-const session = await DIDSession.authorize(authMethod, { resources })
-client.setDID(session.did)
-```
-
-By default a session will expire in 1 day. You can change this time by passing the `expiresInSecs` option to
-indicate how many seconds from the current time you want this session to expire.
-
-```ts
-const oneWeek = 60 * 60 * 24 * 7
-const session = await DIDSession.authorize(authMethod, { resources: [...], expiresInSecs: oneWeek })
-```
-
-A domain/app name is used when making requests. By default, in a browser-based environment the library will use
-the domain name of your app. If you are using the library in a non-web-based environment, you will need to pass
-the `domain` option, otherwise an error will be thrown.
-
-```ts
-const session = await DIDSession.authorize(authMethod, { resources: [...], domain: 'YourAppName' })
-```
-
-## Typical usage pattern
-
-A typical pattern is to store a serialized session in local storage and load on use if available. Then
-check that a session is still valid before making writes.
-
-**Warning:** LocalStorage is used for illustrative purposes here and may not be best for your app, as
-there are a number of known issues with storing secret material in browser storage. The session string
-allows anyone with access to that string to make writes for that user for the time and resources that
-session is valid for. How that session string is stored and managed is the responsibility of the application.
-
-```ts
-import { DIDSession } from 'did-session'
-import type { AuthMethod } from '@didtools/cacao'
-import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.enable()
-const accountId = await getAccountId(ethProvider, addresses[0])
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
-
-const loadSession = async (authMethod: AuthMethod): Promise<DIDSession> => {
- const sessionStr = localStorage.getItem('didsession')
- let session
-
- if (sessionStr) {
- session = await DIDSession.fromSession(sessionStr)
- }
-
- if (!session || (session.hasSession && session.isExpired)) {
- session = await DIDSession.authorize(authMethod, { resources: [...]})
- localStorage.setItem('didsession', session.serialize())
- }
-
- return session
-}
-
-const session = await loadSession(authMethod)
-const ceramic = new CeramicClient()
-ceramic.did = session.did
-
-// pass ceramic instance where needed, ie ceramic, composedb, glaze
-// ...
-
-// before ceramic writes, check if session is still valid, if expired, create new
-if (session.isExpired) {
-  const newSession = await loadSession(authMethod)
-  ceramic.did = newSession.did
-}
-
-// continue to write
-```
-
-## Upgrading from `did-session@0.x.x` to `did-session@1.x.x`
-
-AuthProviders change to AuthMethod interfaces. Similarly, you can import just the auth libraries you need. How you configure and manage
-these AuthMethods may differ, but each will return an AuthMethod function to be used with did-session.
-
-```ts
-// Before with v0.x.x
-//...
-import { EthereumAuthProvider } from '@ceramicnetwork/blockchain-utils-linking'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.enable()
-const authProvider = new EthereumAuthProvider(ethProvider, addresses[0])
-const session = new DIDSession({ authProvider })
-const did = await session.authorize()
-
-// Now did-session@1.0.0
-// ...
-import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'
-
-const ethProvider = // import/get your web3 eth provider
-const addresses = await ethProvider.enable()
-const accountId = await getAccountId(ethProvider, addresses[0])
-const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)
-const session = await DIDSession.authorize(authMethod, { resources: [...]})
-const did = session.did
-```
-
-## Upgrading from `@glazed/did-session` to `did-session`
-
-`authorize` changes to a static method which returns a did-session instance and `getDID()` becomes a `did` getter. For example:
-
-```ts
-// Before @glazed/did-session
-const session = new DIDSession({ authProvider })
-const did = await session.authorize()
-
-// Now did-session
-const session = await DIDSession.authorize(authMethod, { resources: [...]})
-const did = session.did
-```
-
-Requesting resources is now required when authorizing; previously the wildcard (access all) was the default. You can continue to use the
-wildcard by passing it as shown below. The wildcard is typically only used with `@glazed` libraries and/or tile documents, and
-it is best to switch over when possible, as the wildcard option may be deprecated in the future. When using
-composites/models you should request the minimum needed resources instead.
-
-```ts
-const session = await DIDSession.authorize(authMethod, { resources: [`ceramic://*`] })
-const did = session.did
-```
-
-# Class: DIDSession
-
-## Constructors
-
-### constructor
-
-• **new DIDSession**(`params`)
-
-#### Parameters
-
-| Name | Type |
-| :------- | :-------------- |
-| `params` | `SessionParams` |
-
-## Accessors
-
-### authorizations
-
-• `get` **authorizations**(): `string`[]
-
-Get the list of resources a session is authorized for
-
-#### Returns
-
-`string`[]
-
----
-
-### cacao
-
-• `get` **cacao**(): `Cacao`
-
-Get the session CACAO
-
-#### Returns
-
-`Cacao`
-
----
-
-### did
-
-• `get` **did**(): `DID`
-
-Get DID instance, if authorized
-
-#### Returns
-
-`DID`
-
----
-
-### expireInSecs
-
-• `get` **expireInSecs**(): `number`
-
-Number of seconds until a session expires
-
-#### Returns
-
-`number`
-
----
-
-### hasSession
-
-• `get` **hasSession**(): `boolean`
-
-#### Returns
-
-`boolean`
-
----
-
-### id
-
-• `get` **id**(): `string`
-
-DID string associated with the session instance. `session.id` equals `session.did.parent`.
-
-#### Returns
-
-`string`
-
----
-
-### isExpired
-
-• `get` **isExpired**(): `boolean`
-
-Determine if a session is expired or not
-
-#### Returns
-
-`boolean`
-
-## Methods
-
-### isAuthorized
-
-▸ **isAuthorized**(`resources?`): `boolean`
-
-Determine if session is available and optionally if authorized for given resources
-
-#### Parameters
-
-| Name | Type |
-| :----------- | :--------- |
-| `resources?` | `string`[] |
-
-#### Returns
-
-`boolean`
-
----
-
-### serialize
-
-▸ **serialize**(): `string`
-
-Serialize session into string, can store and initialize the same session again while valid
-
-#### Returns
-
-`string`
-
----
-
-### authorize
-
-▸ `Static` **authorize**(`authMethod`, `authOpts?`): `Promise`\<`DIDSession`\>
-
-Request authorization for session
-
-#### Parameters
-
-| Name | Type |
-| :----------- | :----------- |
-| `authMethod` | `AuthMethod` |
-| `authOpts` | `AuthOpts` |
-
-#### Returns
-
-`Promise`\<`DIDSession`\>
-
----
-
-### fromSession
-
-▸ `Static` **fromSession**(`session`): `Promise`\<`DIDSession`\>
-
-Initialize a session from a serialized session string
-
-#### Parameters
-
-| Name | Type |
-| :-------- | :------- |
-| `session` | `string` |
-
-#### Returns
-
-`Promise`\<`DIDSession`\>
-
----
-
-### initDID
-
-▸ `Static` **initDID**(`didKey`, `cacao`): `Promise`\<`DID`\>
-
-#### Parameters
-
-| Name | Type |
-| :------- | :------ |
-| `didKey` | `DID` |
-| `cacao` | `Cacao` |
-
-#### Returns
-
-`Promise`\<`DID`\>
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/key-did.md b/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/key-did.md
deleted file mode 100644
index e4e4480b..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/key-did.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Key DID libraries
-
----
-
-The Key DID libraries include the [resolver](#key-did-resolver) and [multiple providers](#key-did-providers) to provide a simple way for developers to get started using the [DID client](./did-jsonrpc.md) with the `did:key` method.
-
-## Available libraries
-
----
-
-- The [Key DID resolver](#key-did-resolver) allows a DID JSON-RPC client to resolve accounts using the `did:key` method
-- The [Key DID provider ED25519](#key-did-provider-ed25519) allows applications to create and use Key DID accounts for ED25519 keypairs. This provider supports encryption.
-- The [Key DID provider secp256k1](#key-did-provider-secp256k1) allows applications to create and use Key DID accounts for secp256k1 keypairs. This provider does not support encryption.
-
-## Key DID resolver
-
----
-
-The `key-did-resolver` module is needed to resolve DID documents using the `did:key` method.
-
-### Installation
-
-```sh
-npm install key-did-resolver
-```
-
-### Usage
-
-```ts
-import { DID } from 'dids'
-import { getResolver } from 'key-did-resolver'
-
-async function resolveDID() {
- const did = new DID({ resolver: getResolver() })
- return await did.resolve('did:key:...')
-}
-```
-
-## Key DID providers
-
----
-
-Different libraries implement a provider for the `did:key` method based on different cryptographic primitives. These providers may have different possibilities, for example `key-did-provider-ed25519` supports encryption while `key-did-provider-secp256k1` does not.
-
-## Key DID provider ED25519
-
----
-
-This is the **recommended provider** for the `did:key` method in most cases.
-
-### Installation
-
-```sh
-npm install key-did-provider-ed25519
-```
-
-### Usage
-
-```ts
-import { DID } from 'dids'
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-
-// `seed` must be a 32-byte long Uint8Array
-async function authenticateDID(seed) {
- const provider = new Ed25519Provider(seed)
- const did = new DID({ provider, resolver: getResolver() })
- await did.authenticate()
- return did
-}
-```
-
-## Key DID provider secp256k1
-
----
-
-This provider *does not support encryption*, so using methods such as `createJWE` on the `DID` instance is not supported.
-
-### Installation
-
-```sh
-npm install key-did-provider-secp256k1
-```
-
-### Usage
-
-```ts
-import { DID } from 'dids'
-import { Secp256k1Provider } from 'key-did-provider-secp256k1'
-import { getResolver } from 'key-did-resolver'
-
-// `seed` must be a 32-byte long Uint8Array
-async function authenticateDID(seed) {
- const provider = new Secp256k1Provider(seed)
- const did = new DID({ provider, resolver: getResolver() })
- await did.authenticate()
- return did
-}
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/clients-overview.md b/docs/protocol/js-ceramic/guides/ceramic-clients/clients-overview.md
deleted file mode 100644
index 0062797c..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/clients-overview.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Clients
-
-### Ceramic clients
-
-Ceramic clients are libraries that allow your application to communicate with a Ceramic node. Different clients may choose to implement different high-level, language-specific developer APIs. Before submitting requests to a Ceramic node, clients translate those API calls into the standard [Ceramic HTTP API](./javascript-clients/ceramic-http.md), which it uses to actually communicate with a Ceramic node.
-
-### Account clients
-
-Account clients are libraries that allow your application to recognize users, authenticate, and perform other account-related functionality such as signing transactions and encrypting data.
-
-## Available clients
-
----
-
-When building with Ceramic clients, be sure to install both a Ceramic client and an account client.
-
-### [**JS Ceramic HTTP Client →**](./javascript-clients/ceramic-http.md)
-
-The Ceramic JS HTTP client is a Ceramic client that can be used in browsers and Node.js environments to connect your application to a Ceramic node. It is actively maintained by 3Box Labs and supports the latest Ceramic features. This is the recommended Ceramic client to build with for most applications.
-
-
-
-### [DID JSON-RPC Client →](./authentication/did-jsonrpc.md)
-
-The DID JSON-RPC Client is an account client that provides a simple JS API for interacting with Ceramic accounts. It is actively maintained by 3Box Labs and supports all account types.
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/ceramic-http.md b/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/ceramic-http.md
deleted file mode 100644
index 246659a4..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/ceramic-http.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Ceramic HTTP client
-
-The Ceramic HTTP client library can be used in browsers and Node.js to connect your application to a Ceramic node. It is actively maintained and supports the latest Ceramic features.
-
-
-
-## Things to know
-
-
-- The client is read-only by default; to enable transactions, a [DID client](../authentication/did-jsonrpc.md) needs to be attached to the Ceramic client instance.
-- Ceramic streams can be identified by a **stream ID** or a **commit ID**. A **stream ID** is generated when creating the stream and can be used to load the **latest version** of the stream, while a **commit ID** represents a **specific version** of the stream.
-
-## Installation
-
-```bash
-npm install @ceramicnetwork/http-client
-```
-
-
-
-## Common use-cases
-
-### Load a single stream
-
-```ts
-// Import the client
-import { CeramicClient } from '@ceramicnetwork/http-client'
-
-// Connect to a Ceramic node
-const ceramic = new CeramicClient('https://your-ceramic-node.com')
-
-// The `id` argument can be a stream ID (to load the latest version)
-// or a commit ID (to load a specific version)
-async function load(id) {
- return await ceramic.loadStream(id)
-}
-```
-
-### Load multiple streams
-
-Rather than using the `loadStream` method multiple times with `Promise.all()` to load multiple streams at once, a **more efficient way for loading multiple streams** is to use the `multiQuery` method.
-
-```ts
-// Import the client
-import { CeramicClient } from '@ceramicnetwork/http-client'
-
-// Connect to a Ceramic node
-const ceramic = new CeramicClient('https://your-ceramic-node.com')
-
-// The `ids` argument can contain an array of stream IDs (to load the latest version)
-// or commit IDs (to load a specific version)
-async function loadMulti(ids = []) {
- const queries = ids.map((streamId) => ({ streamId }))
- // This will return an Object of stream ID keys to stream values
- return await ceramic.multiQuery(queries)
-}
-```
-
-### Enable transactions
-
-In order to create and update streams, the Ceramic client instance must be able to sign transaction payloads by using an authenticated DID instance. The [DID client documentation](../authentication/did-jsonrpc.md) describes the process of authenticating and attaching a DID instance to the Ceramic instance.
-
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/http-api.md b/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/http-api.md
deleted file mode 100644
index e9a08a88..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/http-api.md
+++ /dev/null
@@ -1,772 +0,0 @@
-# Ceramic HTTP API
-
----
-
-The Ceramic HTTP API is the standard lowest-level communication protocol between
-clients and nodes on the Ceramic network. It allows client applications to
-manually make REST HTTP requests to a remote Ceramic node to send transactions,
-retrieve data, and "pin" data to make it available.
-
-If you are building an application, you will usually interact with Ceramic using
-a client API, such as the
-[JS HTTP Client](./ceramic-http).
-
-## When to use the HTTP API
-
----
-
-The HTTP API is useful if you have a special use case where you directly want to
-make manual HTTP requests, or you want to implement an HTTP client in a new
-language.
-
-:::caution
-
-**Gateway mode**
-
- Some HTTP API methods will not be available if the Ceramic node you are using runs in *gateway mode*. This option disables writes, which is useful when exposing your node to the internet. **API methods that are disabled when running in gateway mode will be clearly marked.**
-
-:::
-
-## Streams API
-
-The `stream` endpoint is used to create new streams and load streams from the
-node using a StreamID or genesis content.
-
-### Loading a stream
-
-Load the state of a stream given its StreamID.
-
-=== "Request"
-
- ```
- GET /api/v0/streams/:streamid
- ```
-
- Here, `:streamid` should be replaced by the StreamID of the stream that is being requested.
-
-=== "Response" The response body contains the following fields:
-
- - `streamId` - the StreamID of the requested stream as string
- - `state` - the state of the requested stream as [StreamState](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.StreamState.html)
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/streams/kjzl6cwe1jw147r7878h32yazawcll6bxe5v92348cxitif6cota91qp68grbhm
- ```
-
-=== "Response"
-
- ```bash
- {
- "streamId": "kjzl6cwe1jw147r7878h32yazawcll6bxe5v92348cxitif6cota91qp68grbhm",
- "state": {
- "type": 0,
- "content": {
- "Ceramic": "pottery"
- },
- "metadata": {
- "schema": null,
- "controllers": [
- "did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"
- ]
- },
- "signature": 2,
- "anchorStatus": "PENDING",
- "log": [{
- "cid": "bagcqceramof2xi7kh6qblirzkbc7yulcjcticlcob6uvdrx3bexgks37ilva",
- "type": 0
- }],
- "anchorScheduledFor": "12/15/2020, 2:45:00 PM"
- }
- }
- ```
-
-### Creating a Stream
-
-:::note
-**Disabled in gateway mode**
-:::
-
-Create a new stream, or load a stream from its genesis content. The genesis
-content may be signed, or unsigned in some cases.
-
-=== "Request"
-
- ```bash
- POST /api/v0/streams
- ```
-
- #### Request body fields:
-
- - `type` - the type code of the StreamType to use. Type codes for the supported stream types can be found [in this table](https://github.com/ceramicnetwork/CIPs/blob/main/tables/streamtypes.csv).
- - `genesis` - the genesis content of the stream (will differ per StreamType)
- - `opts` - options for the stream creation, [CreateOpts](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.CreateOpts.html) (optional)
-
-=== "Response"
-
- The response body contains the following fields:
-
- - `streamId` - the StreamID of the requested stream as string
- - `state` - the state of the requested stream as [StreamState](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.StreamState.html)
-
-#### **Example**
-
-This example creates a `TileDocument` from an unsigned genesis commit. Note that
-if the content is defined for a `TileDocument` genesis commit, it needs to be
-signed.
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/streams -X POST -d '{
- "type": 0,
- "genesis": {
- "header": {
- "family": "test",
- "controllers": ["did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"]
- }
- }
- }' -H "Content-Type: application/json"
- ```
-
-=== "Response"
-
- ```bash
- {
- "streamId": "k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl",
- "state": {
- "type": 0,
- "content": {},
- "metadata": {
- "family": "test",
- "controllers": [
- "did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"
- ]
- },
- "signature": 0,
- "anchorStatus": "PENDING",
- "log": [
- {
- "cid": "bafyreihtdxfb6cpcvomm2c2elm3re2onqaix6frq4nbg45eaqszh5mifre",
- "type": 0
- }
- ],
- "anchorScheduledFor": "12/15/2020, 3:00:00 PM"
- }
- }
- ```
-
-## Multiqueries API
-
-The `multiqueries` endpoint enables querying multiple streams at once, as well
-as querying streams which are linked.
-
-### Querying multiple streams
-
-This endpoint allows you to query multiple StreamIDs. Along with each StreamID
-an array of paths can be passed. If any of the paths within the stream structure
-contains a Ceramic StreamID url (`ceramic://`), this linked stream
-will also be returned as part of the response.
-
-=== "Request"
-
- ```bash
- POST /api/v0/multiqueries
- ```
-
- #### Request body fields:
- - `queries` - an array of [MultiQuery](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.MultiQuery.html) objects
-
-=== "Response"
-
- The response body contains a map from StreamID strings to [StreamState](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.StreamState.html) objects.
-
-#### Example
-
-First, let's create three streams to query using the Ceramic CLI:
-
-=== "Request1"
-
- ```bash
- ceramic create tile --content '{ "Document": "A" }'
- ```
-
-=== "Response1"
-
- ```bash
- StreamID(kjzl6cwe1jw149rledowj0zi0icd7epi9y1m5tx4pardt1w6dzcxvr6bohi8ejc)
- {
- "Document": "A"
- }
- ```
-
-=== "Request2"
-
- ```bash
- ceramic create tile --content '{ "Document": "B" }'
- ```
-
-=== "Response2"
-
- ```bash
- StreamID(kjzl6cwe1jw147w3xz3xrcd37chh2rz4dfra3imtnsni385rfyqa3hbx42qwal0)
- {
- "Document": "B"
- }
- ```
-
-=== "Request3"
-
- ```bash
- ceramic create tile --content '{
- "Document": "C",
- "link": "ceramic://kjzl6cwe1jw149rledowj0zi0icd7epi9y1m5tx4pardt1w6dzcxvr6bohi8ejc"
- }'
- ```
-
-=== "Response3"
-
- ```bash
- StreamID(kjzl6cwe1jw14b54pb10voc4bqh73qyu8o6cfu66hoi3feidbbj81i5rohh7kgl)
- {
- "link": "ceramic://kjzl6cwe1jw149rledowj0zi0icd7epi9y1m5tx4pardt1w6dzcxvr6bohi8ejc",
- "Document": "C"
- }
- ```
-
-Now let's query them through the multiqueries endpoint:
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/multiqueries -X POST -d '{
- "queries": [{
- "streamId": "kjzl6cwe1jw14b54pb10voc4bqh73qyu8o6cfu66hoi3feidbbj81i5rohh7kgl",
- "paths": ["link"]
- }, {
- "streamId": "kjzl6cwe1jw147w3xz3xrcd37chh2rz4dfra3imtnsni385rfyqa3hbx42qwal0",
- "paths": []
- }]
- }' -H "Content-Type: application/json"
- ```
-
-=== "Response"
-
- ```bash
- {
- "kjzl6cwe1jw14b54pb10voc4bqh73qyu8o6cfu66hoi3feidbbj81i5rohh7kgl": {
- "type": 0,
- "content": {
- "link": "ceramic://kjzl6cwe1jw149rledowj0zi0icd7epi9y1m5tx4pardt1w6dzcxvr6bohi8ejc",
- "Document": "C"
- },
- "metadata": {
- "schema": null,
- "controllers": [
- "did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"
- ]
- },
- "signature": 2,
- "anchorStatus": "PENDING",
- "log": [
- {
- "cid": "bagcqcera5nx45nccxvjjyxsq3so5po77kpqzbfsydy6yflnkt6p5tnjvhbkq",
- "type": 0
- }
- ],
- "anchorScheduledFor": "12/30/2020, 1:45:00 PM"
- },
- "kjzl6cwe1jw149rledowj0zi0icd7epi9y1m5tx4pardt1w6dzcxvr6bohi8ejc": {
- "type": 0,
- "content": {
- "Document": "A"
- },
- "metadata": {
- "schema": null,
- "controllers": [
- "did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"
- ]
- },
- "signature": 2,
- "anchorStatus": "PENDING",
- "log": [
- {
- "cid": "bagcqcerawq5h7otlkdwuai7vhogqhs2aeaauwbu2aqclrh4iyu5h54qqogma",
- "type": 0
- }
- ],
- "anchorScheduledFor": "12/30/2020, 1:45:00 PM"
- },
- "kjzl6cwe1jw147w3xz3xrcd37chh2rz4dfra3imtnsni385rfyqa3hbx42qwal0": {
- "type": 0,
- "content": {
- "Document": "B"
- },
- "metadata": {
- "schema": null,
- "controllers": [
- "did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"
- ]
- },
- "signature": 2,
- "anchorStatus": "PENDING",
- "log": [
- {
- "cid": "bagcqceranecdjzw4xheudgkr2amjkntpktci2xv44d7v4hbft3ndpptid6ka",
- "type": 0
- }
- ],
- "anchorScheduledFor": "12/30/2020, 1:45:00 PM"
- }
- }
- ```
-
-## Commits API
-
-The `commits` endpoint provides lower-level access to the data structure of a
-Ceramic stream. It is also the endpoint used to update a stream by adding a new
-commit.
-
-### Getting all commits in a stream
-
-Calling GET on the _commits_ endpoint with a StreamID gives you access to all
-of the commits of the given stream. This is useful if you want to inspect
-the stream history, or apply all of the commits to a Ceramic node that is not
-connected to the network.
-
-=== "Request"
-
- ```bash
- GET /api/v0/commits/:streamid
- ```
-
- Here, `:streamid` should be replaced by the string representation of the StreamID of the stream that is being requested.
-
-=== "Response"
-
- * `streamId` - the StreamID of the requested stream, string
- * `commits` - an array of commit objects
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/commits/kjzl6cwe1jw14ahmwunhk9yjwawac12tb52j1uj3b9a57eohmhycec8778p3syv
- ```
-
-=== "Response"
-
- ```bash
- {
- "streamId": "kjzl6cwe1jw14ahmwunhk9yjwawac12tb52j1uj3b9a57eohmhycec8778p3syv",
- "commits": [
- {
- "cid": "bagcqcera2faj5vik2giftqxftbngfndkci7x4z5vp3psrf4flcptgkz5xztq",
- "value": {
- "jws": {
- "payload": "AXESIAsUBpZMnue1yQ0BgXsjOFyN0cHq6AgspXnI7qGB54ux",
- "signatures": [
- {
- "signature": "16tBnfkXQU0yo-RZvfjWhm7pP-hIxJ5m-FIMHlCrRkpjbleoEcaC80Xt7qs_WZOlOCexznjow9aX4aZe51cYCQ",
- "protected": "eyJhbGciOiJFZERTQSIsImtpZCI6ImRpZDprZXk6ejZNa2ZaNlM0TlZWVEV1dHM4bzV4RnpSTVI4ZUM2WTFibmdvQlFOblhpQ3ZoSDhII3o2TWtmWjZTNE5WVlRFdXRzOG81eEZ6Uk1SOGVDNlkxYm5nb0JRTm5YaUN2aEg4SCJ9"
- }
- ],
- "link": "bafyreialcqdjmte64624sdibqf5sgoc4rxi4d2xibawkk6oi52qydz4lwe"
- },
- "linkedBlock": "o2RkYXRhoWV0aXRsZXFNeSBmaXJzdCBEb2N1bWVudGZoZWFkZXKiZnNjaGVtYfZrY29udHJvbGxlcnOBeDhkaWQ6a2V5Ono2TWtmWjZTNE5WVlRFdXRzOG81eEZ6Uk1SOGVDNlkxYm5nb0JRTm5YaUN2aEg4SGZ1bmlxdWVwenh0b1A5blphdVgxcEE0OQ"
- }
- },
- {
- "cid": "bagcqcera3fkje7je4lvctkam4fvi675avtcuqgrv7dn6aoqljd5lebpl7rfq",
- "value": {
- "jws": {
- "payload": "AXESINm6lI30m3j5H2ausx-ulXj-L9CmFlOTZBZvJ2O734Zt",
- "signatures": [
- {
- "signature": "zsLJbBSU5xZTQkYlXwEH9xj_t_8frvSFCYs0SlVMPXOnw8zOJOsKnJDQlUOvPJxjt8Bdc_7xoBdmcRG1J1tpCw",
- "protected": "eyJhbGciOiJFZERTQSIsImtpZCI6ImRpZDprZXk6ejZNa2ZaNlM0TlZWVEV1dHM4bzV4RnpSTVI4ZUM2WTFibmdvQlFOblhpQ3ZoSDhII3o2TWtmWjZTNE5WVlRFdXRzOG81eEZ6Uk1SOGVDNlkxYm5nb0JRTm5YaUN2aEg4SCJ9"
- }
- ],
- "link": "bafyreigzxkki35e3pd4r6zvowmp25fly7yx5bjqwkojwiftpe5r3xx4gnu"
- },
- "linkedBlock": "pGJpZNgqWCYAAYUBEiDRQJ7VCtGQWcLlmFpitGoSP35ntX7fKJeFWJ8zKz2+Z2RkYXRhgaNib3BjYWRkZHBhdGhlL21vcmVldmFsdWUY6mRwcmV22CpYJgABhQESINFAntUK0ZBZwuWYWmK0ahI/fme1ft8ol4VYnzMrPb5nZmhlYWRlcqFrY29udHJvbGxlcnOA"
- }
- }
- ]
- }
- ```
-
-### Applying a new commit to a stream
-
-:::note
-**Disabled in gateway mode**
-:::
-
-In order to modify a stream we apply a commit to its log. This commit usually
-contains a signature over a _json-patch_ diff describing a modification to the
-stream contents. The commit also needs to contain pointers to the previous
-commit and other metadata. You can read more about this in the
-[Ceramic Specification](https://github.com/ceramicnetwork/.github/blob/main/LEGACY_SPECIFICATION.md).
-Different stream types may have different formats for their commits. If you want
-to see an example implementation for how to construct a commit, you can have a
-look at the implementation of the TileDocument.
-
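-If you use the JS client, the streamtype implementations construct and submit
-these commits for you. As a sketch (assuming a local node and a client whose
-DID is already authenticated), a `TileDocument` update builds the signed
-json-patch commit and submits it to this endpoint under the hood:
-
-```ts
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { TileDocument } from '@ceramicnetwork/stream-tile'
-
-// Assumes ceramic.did has already been set to an authenticated DID
-const ceramic = new CeramicClient('http://localhost:7007')
-
-async function updateDocument(streamId: string): Promise<void> {
-  const doc = await TileDocument.load(ceramic, streamId)
-  // update() constructs the commit (json-patch diff, `prev` pointer,
-  // signature) and POSTs it to /api/v0/commits
-  await doc.update({ title: 'My first Document', more: 234 })
-}
-```
-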
-=== "Request"
-
- ```bash
- POST /api/v0/commits
- ```
-
- #### Request body fields:
-
- - `streamId` - the StreamID of the stream to apply the commit to, string
- - `commit` - the content of the commit to apply (will differ per streamtype)
- - `opts` - options for the stream update [UpdateOpts](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.UpdateOpts.html) (optional)
-
-=== "Response"
-
- * `streamId` - the StreamID of the stream that was modified
- * `state` - the new state of the stream that was modified, [StreamState](https://developers.ceramic.network/reference/typescript/interfaces/_ceramicnetwork_common.StreamState.html)
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/commits -X POST -d '{
- "streamId": "kjzl6cwe1jw14ahmwunhk9yjwawac12tb52j1uj3b9a57eohmhycec8778p3syv",
- "commit": {
- "jws": {
- "payload": "AXESINm6lI30m3j5H2ausx-ulXj-L9CmFlOTZBZvJ2O734Zt",
- "signatures": [
- {
- "signature": "zsLJbBSU5xZTQkYlXwEH9xj_t_8frvSFCYs0SlVMPXOnw8zOJOsKnJDQlUOvPJxjt8Bdc_7xoBdmcRG1J1tpCw",
- "protected": "eyJhbGciOiJFZERTQSIsImtpZCI6ImRpZDprZXk6ejZNa2ZaNlM0TlZWVEV1dHM4bzV4RnpSTVI4ZUM2WTFibmdvQlFOblhpQ3ZoSDhII3o2TWtmWjZTNE5WVlRFdXRzOG81eEZ6Uk1SOGVDNlkxYm5nb0JRTm5YaUN2aEg4SCJ9"
- }
- ],
- "link": "bafyreigzxkki35e3pd4r6zvowmp25fly7yx5bjqwkojwiftpe5r3xx4gnu"
- },
- "linkedBlock": "pGJpZNgqWCYAAYUBEiDRQJ7VCtGQWcLlmFpitGoSP35ntX7fKJeFWJ8zKz2+Z2RkYXRhgaNib3BjYWRkZHBhdGhlL21vcmVldmFsdWUY6mRwcmV22CpYJgABhQESINFAntUK0ZBZwuWYWmK0ahI/fme1ft8ol4VYnzMrPb5nZmhlYWRlcqFrY29udHJvbGxlcnOA"
- }
- }' -H "Content-Type: application/json"
- ```
-
-=== "Response"
-
- ```bash
- {
- "streamId": "kjzl6cwe1jw14ahmwunhk9yjwawac12tb52j1uj3b9a57eohmhycec8778p3syv",
- "state": {
- "type": 0,
- "content": {
- "title": "My first Document"
- },
- "metadata": {
- "schema": null,
- "controllers": [
- "did:key:z6MkfZ6S4NVVTEuts8o5xFzRMR8eC6Y1bngoBQNnXiCvhH8H"
- ]
- },
- "signature": 2,
- "anchorStatus": "PENDING",
- "log": [
- {
- "cid": "bagcqcera2faj5vik2giftqxftbngfndkci7x4z5vp3psrf4flcptgkz5xztq",
- "type": 0
- },
- {
- "cid": "bagcqcera3fkje7je4lvctkam4fvi675avtcuqgrv7dn6aoqljd5lebpl7rfq",
- "type": 1
- }
- ],
- "anchorScheduledFor": "12/30/2020, 1:15:00 PM",
- "next": {
- "content": {
- "title": "My first Document",
- "more": 234
- },
- "metadata": {
- "schema": null,
- "controllers": []
- }
- }
- }
- }
- ```
-
-## Pins API
-
-The `pins` API endpoint can be used to manipulate the pinset. The pinset is all
-of the streams that a node maintains the state of. Any stream opened by the node
-that is not pinned will eventually be garbage collected from the node.
-
-### Adding to pinset
-
-:::note
-**Disabled in gateway mode**
-:::
-
-This method adds the stream with the given StreamID to the pinset.
-
-=== "Request"
-
- ```bash
- POST /api/v0/pins/:streamid
- ```
-
-    Here, `:streamid` should be replaced by the string representation of the StreamID of the stream that is being pinned.
-
-=== "Response"
-
- If the operation was successful the response will be a 200 OK.
-
- * `streamId` - the StreamID of the stream which was pinned, string
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/pins/k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl -X POST
- ```
-
-=== "Response"
-
- ```bash
- {
- "streamId": "k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl"
- }
- ```
-
-### Removing from pinset
-
-:::note
-**Disabled in gateway mode**
-:::
-
-This method removes the stream with the given StreamID from the pinset.
-
-=== "Request"
-
- ```bash
- DELETE /api/v0/pins/:streamid
- ```
-
-    Here, `:streamid` should be replaced by the string representation of the StreamID of the stream that is being unpinned.
-
-=== "Response"
-
- If the operation was successful the response will be a 200 OK.
-
- * `streamId` - the StreamID of the stream which was unpinned, string
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/pins/k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl -X DELETE
- ```
-
-=== "Response"
-
- ```bash
- {
- "streamId": "k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl"
- }
- ```
-
-### Listing streams in pinset
-
-Calling this method allows you to list all of the streams that are in the pinset
-on this node.
-
-=== "Request"
-
- ```bash
- GET /api/v0/pins
- ```
-
-=== "Response"
-
- * `pinnedStreamIds` - an array of StreamID strings that are in the pinset
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/pins
- ```
-
-=== "Response"
-
- ```bash
- {
- "pinnedStreamIds": [
- "k2t6wyfsu4pfwqaju0w9nmi53zo6f5bcier7vc951x4b9rydv6t8q4pvzd5w3l",
- "k2t6wyfsu4pfxon8reod8xcyka9bujeg7acpz8hgh0jsyc7p2b334izdyzsdp7",
- "k2t6wyfsu4pfxqseec01fnqywmn8l93p4g2chzyx3sod3hpyovurye9hskcegs",
- "k2t6wyfsu4pfya9y0ega1vnokf0g5qaus69basy52oxg50y3l35vm9rqbb88t3"
- ]
- }
- ```
-
-### Checking inclusion in pinset
-
-This method is used to check if a particular stream is in the pinset.
-
-=== "Request"
-
- ```bash
- GET /api/v0/pins/:streamid
- ```
-
- Here, `:streamid` should be replaced by the string representation of the StreamID of the stream that is being requested.
-
-=== "Response"
-
- * `pinnedStreamIds` - an array containing the specified StreamID string if that stream is pinned, or an empty array if that stream is not pinned
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/pins/k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl
- ```
-
-=== "Response"
-
- ```bash
- {
- "pinnedStreamIds": ["k2t6wyfsu4pg2qvoorchoj23e8hf3eiis4w7bucllxkmlk91sjgluuag5syphl"]
- }
- ```
-
-## Node Info APIs
-
-The methods under the `/node` path provide more information about this
-particular node.
-
-### Supported blockchains for anchoring
-
-Get all of the
-[CAIP-2](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-2.md)
-_chainIds_ supported by this node.
-
-=== "Request"
-
- ```bash
- GET /api/v0/node/chains
- ```
-
-=== "Response"
-
- The response body contains the following fields:
-
-    - `supportedChains` - an array with [CAIP-2](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-2.md) formatted chainIds
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/node/chains
- ```
-
-=== "Response"
-
- ```bash
- {
- "supportedChains": ["eip155:3"]
- }
- ```
-
-### Health check
-
-Check the health of the node and the machine it's running on. Run
-`ceramic daemon -h` for more details on how this can be configured.
-
-=== "Request"
-
- ```bash
- GET /api/v0/node/healthcheck
- ```
-
-=== "Response"
-
- Either a `200` response with the text `Alive!`, or a `503` with the text `Insufficient resources`.
-
-#### Example
-
-=== "Request"
-
- ```bash
- curl http://localhost:7007/api/v0/node/healthcheck
- ```
-
-=== "Response"
-
- ```bash
- Alive!
- ```
-
-### Node status
-
-The node status endpoint exposes information about the node's status.
-
-:::note
-**Admin DID required**
-:::
-
-Access to this endpoint is restricted to admin DIDs, so the request headers need
-to contain a signature for the request. The recommended way to interact with this
-endpoint is using the CLI with the `ceramic status` command.
-
-=== "Request"
-
- ```bash
- GET /api/v0/admin/status
- ```
-
-=== "Response"
-
- Either a `200` response with the JSON payload, or a server error.
-
-#### Example
-
-=== "Command"
-
- ```bash
- ceramic status
- ```
-
-=== "Response"
-
- ```json
- {
- "runId": "7647439f-44fa-4aff-b3c8-b7e16015c52e",
- "uptimeMs": 27638,
- "network": "inmemory",
- "anchor": {
- "anchorServiceUrl": "",
- "ethereumRpcEndpoint": null,
- "chainId": "inmemory:12345"
- },
- "ipfs": {
- "peerId": "12D3KooWRzv8fM4oV6jRj8nsg8kxo3Z9u26vVXLaUKiLbuoV3Vtp",
- "addresses": [
- "/ip4/127.0.0.1/tcp/4011/p2p/12D3KooWRzv8fM4oV6jRj8nsg8kxo3Z9u26vVXLaUKiLbuoV3Vtp",
- "/ip4/192.168.0.101/tcp/4011/p2p/12D3KooWRzv8fM4oV6jRj8nsg8kxo3Z9u26vVXLaUKiLbuoV3Vtp"
- ]
- },
- "composeDB": {
- "indexedModels": []
- }
- }
- ```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/pinning.md b/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/pinning.md
deleted file mode 100644
index b1fac615..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/pinning.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Pinning
-
-Pinning allows you to persist and make streams available on a Ceramic node beyond a single session. This guide demonstrates how to add and remove streams from your node's pinset, and how to list the streams currently in the pinset. In order to interact with a pinset, you must have [installed a Ceramic client](./ceramic-http.md).
-
-## Overview
-
-By default Ceramic will garbage collect any stream that has been written or [queried](./queries.md) on your node after some period of time. In order to prevent the loss of streams due to garbage collection, you need to explicitly pin the streams that you wish to persist. Pinning instructs the node to keep them around in persistent storage until they are explicitly unpinned.
-
-## **Pin a stream while creating it**
-
-Most StreamTypes will allow you to request that a Stream be pinned at the same time that you create the Stream. An example using the TileDocument StreamType is below:
-
-```javascript
-await TileDocument.create(ceramic, content, null, { pin: true })
-```
-
-## **Add to pinset**
-
-Use the `pin.add()` method to add an existing stream to your permanent pinset.
-
-```javascript
-const streamId = 'kjzl6cwe1jw14...'
-await ceramic.admin.pin.add(streamId)
-```
-
-
-## **Remove from pinset**
-
-Use the `pin.rm()` method to remove a stream from your permanent pinset.
-
-```javascript
-const streamId = 'kjzl6cwe1jw14...'
-await ceramic.admin.pin.rm(streamId)
-```
-
-
-## **List streams in pinset**
-
-Use the `pin.ls()` method to list streams currently in your permanent pinset.
-
-```javascript
-const streamIds = await ceramic.admin.pin.ls()
-```
-
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/queries.md b/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/queries.md
deleted file mode 100644
index a2359f01..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/queries.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# Queries
-
-This guide demonstrates how to query streams during runtime using the [JS HTTP](./ceramic-http.md) and JS Core clients.
-
-## **Requirements**
-
-You need to have an [installed client](./ceramic-http.md) to perform queries during runtime.
-
-## **Query a stream**
-
-Use the `loadStream()` method to load a single stream using its _StreamID_.
-
-```javascript
-const streamId = 'kjzl6cwe1jw14...'
-const stream = await ceramic.loadStream(streamId)
-```
-
-:::caution
-
- When using the Typescript APIs, `loadStream` by default returns an object of type `Stream`, which will not have any methods available to perform updates, or any other streamtype-specific methods or accessors. To be able to perform updates, as well as to access streamtype-specific data or functionality, you need to specialize the `loadStream` method on the StreamType of the Stream being loaded.
-:::
-
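-For example, a sketch of specializing on the `TileDocument` streamtype
-(assuming `@ceramicnetwork/stream-tile` is installed and the client has an
-authenticated DID for updates):
-
-```ts
-import { TileDocument } from '@ceramicnetwork/stream-tile'
-
-// Specializing returns a TileDocument instance with streamtype-specific
-// accessors and update methods available
-const doc = await ceramic.loadStream<TileDocument>(streamId)
-await doc.update({ ...doc.content, edited: true })
-```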
-
-## **Query a stream at a specific commit**
-
-If you want to see the contents of a stream as of a specific point in time, it's possible to pass a _CommitID_ instead of a _StreamID_ to the `loadStream()` method described above. This will cause the Stream to be loaded at the specified commit, rather than the current commit as loaded from the network. When loading with a CommitID, the returned Stream object will be marked as readonly and cannot be used to perform updates. If you wish to perform updates, load a new instance of the Stream using its StreamID.
-
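-A sketch, using a placeholder CommitID:
-
-```ts
-// Load a read-only snapshot of the stream as of a specific commit
-const commitId = 'k6zn3t2py84...' // placeholder CommitID
-const snapshot = await ceramic.loadStream(commitId)
-// snapshot.content reflects the stream at that commit; calling
-// snapshot.update() would fail because the instance is readonly
-```
-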
-## **Query multiple streams**
-
-Use the `multiQuery()` method to load multiple streams at once. The returned object is a map from _StreamIDs_ to stream instances.
-
-```javascript
-const queries = [
- {
- streamId: 'kjzl6cwe1jw...14',
- },
- {
- streamId: 'kjzl6cwe1jw...15',
- },
-]
-const streamMap = await ceramic.multiQuery(queries)
-```
-
-
-## **Query a stream using paths**
-
-Use the `multiQuery()` method to load one or more streams using known paths from a root stream to its linked streams.
-
-Imagine a stream `kjzl6cwe1jw...14` whose content contains the StreamIDs of two other streams. These StreamIDs exist at various levels within a nested JSON structure.
-
-```javascript
-{
- a: 'kjzl6cwe1jw...15',
- b: {
- c: 'kjzl6cwe1jw...16'
- }
-}
-```
-
-In the stream above, the path from root stream `kjzl6cwe1jw...14` to linked stream `kjzl6cwe1jw...15` is `/a` and the path to linked stream `kjzl6cwe1jw...16` is `/b/c`. Using the StreamID of the root stream and the paths outlined here, we use `multiQuery()` to query all three streams at once without needing to explicitly know the StreamIDs of the two linked streams.
-
-The `multiQuery()` below will return a map with all three streams.
-
-```javascript
-const queries = [{
-  streamId: 'kjzl6cwe1jw...14',
- paths: ['/a', '/b/c']
-}]
-const streamMap = await ceramic.multiQuery(queries)
-```
-
-
-## **Helper methods**
-
-To get specific information about the stream that you created or loaded you can use the accessors on the `Stream` class. Below are some examples.
-
-
-
-### Get StreamID
-
-Use the `stream.id` property to get the unique `StreamID` for this stream.
-
-```javascript
-const streamId = stream.id
-```
-
-
-
-### Get latest commit
-
-Use the `stream.commitId` property to get the latest CommitID of a stream.
-
-```javascript
-const commitId = stream.commitId
-```
-
-
-
-### Get all anchor commits
-
-Use the `stream.anchorCommitIds` property to get all CommitIDs which are anchor commits for this stream.
-
-```javascript
-const anchorCommits = stream.anchorCommitIds
-```
diff --git a/docs/protocol/js-ceramic/guides/ceramic-clients/stream-api/caip10-link.md b/docs/protocol/js-ceramic/guides/ceramic-clients/stream-api/caip10-link.md
deleted file mode 100644
index 382f340c..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-clients/stream-api/caip10-link.md
+++ /dev/null
@@ -1,133 +0,0 @@
-# CAIP-10 Link client
-
----
-
-A CAIP-10 Link is a stream that stores a proof that links a blockchain address to a Ceramic account (DID), using the [CAIP-10 standard](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-10.md) to represent blockchain addresses.
-
-
-## Installation
-
----
-
-```sh
-npm install @ceramicnetwork/stream-caip10-link
-```
-
-### Additional requirements
-
-- In order to load CAIP-10 Links, a [Ceramic client instance](../javascript-clients/ceramic-http.md) must be available
-- To add/remove links, the client must also have an [authenticated DID](../authentication/did-jsonrpc.md)
-- An authentication provider is needed to sign the payload for the given CAIP-10 account, using the `blockchain-utils-linking` module that should be installed as needed:
-
-```sh
-npm install @ceramicnetwork/blockchain-utils-linking
-```
-
-## Common usage
-
----
-
-### Load a link
-
-In this example we load a Caip10Link for the account `0x054...7cb8` on the Ethereum mainnet blockchain (`eip155:1`).
-
-```ts
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { Caip10Link } from '@ceramicnetwork/stream-caip10-link'
-
-const ceramic = new CeramicClient()
-
-async function getLinkedDID() {
- // Using the Ceramic client instance, we can load the link for a given CAIP-10 account
- const link = await Caip10Link.fromAccount(
- ceramic,
- '0x0544dcf4fce959c6c4f3b7530190cb5e1bd67cb8@eip155:1',
- )
- // The `did` property of the loaded link will contain the DID string value if set
- return link.did
-}
-```
-
-### Create a link
-
-Here we can see the full flow of getting a user's Ethereum address, creating a link, and linking the user's DID account.
-
-In this example we create a Caip10Link for the account `0x054...7cb8` on the Ethereum mainnet blockchain (`eip155:1`) and then associate it with the DID `did:3:k2t6...ydki`.
-
-```ts
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { Caip10Link } from '@ceramicnetwork/stream-caip10-link'
-import { EthereumAuthProvider } from '@ceramicnetwork/blockchain-utils-linking'
-
-const ceramic = new CeramicClient()
-
-async function linkCurrentAddress() {
- // First, we need to create an EthereumAuthProvider with the account currently selected
- // The following assumes there is an injected `window.ethereum` provider
- const addresses = await window.ethereum.request({
- method: 'eth_requestAccounts',
- })
- const authProvider = new EthereumAuthProvider(window.ethereum, addresses[0])
-
- // Retrieve the CAIP-10 account from the EthereumAuthProvider instance
- const accountId = await authProvider.accountId()
-
- // Load the account link based on the account ID
- const accountLink = await Caip10Link.fromAccount(
- ceramic,
- accountId.toString(),
- )
-
- // Finally, link the DID to the account using the EthereumAuthProvider instance
- await accountLink.setDid(
- 'did:3:k2t6wyfsu4pg0t2n4j8ms3s33xsgqjhtto04mvq8w5a2v5xo48idyz38l7ydki',
- authProvider,
- )
-}
-```
-
-### Remove a link
-
-Removing a link involves a similar flow to setting the DID, but using the `clearDid` method instead of `setDid`:
-
-```ts
-import { CeramicClient } from '@ceramicnetwork/http-client'
-import { Caip10Link } from '@ceramicnetwork/stream-caip10-link'
-import { EthereumAuthProvider } from '@ceramicnetwork/blockchain-utils-linking'
-
-const ceramic = new CeramicClient()
-
-async function unlinkCurrentAddress() {
- // First, we need to create an EthereumAuthProvider with the account currently selected
- // The following assumes there is an injected `window.ethereum` provider
- const addresses = await window.ethereum.request({
- method: 'eth_requestAccounts',
- })
- const authProvider = new EthereumAuthProvider(window.ethereum, addresses[0])
-
- // Retrieve the CAIP-10 account from the EthereumAuthProvider instance
- const accountId = await authProvider.accountId()
-
- // Load the account link based on the account ID
- const accountLink = await Caip10Link.fromAccount(
- ceramic,
- accountId.toString(),
- )
-
- // Finally, unlink the DID from the account using the EthereumAuthProvider instance
- await accountLink.clearDid(authProvider)
-}
-```
-
-
diff --git a/docs/protocol/js-ceramic/guides/ceramic-nodes/running-cloud.md b/docs/protocol/js-ceramic/guides/ceramic-nodes/running-cloud.md
deleted file mode 100644
index 9a58cd60..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-nodes/running-cloud.md
+++ /dev/null
@@ -1,271 +0,0 @@
-# Running Ceramic nodes in the cloud environment
-
----
-
-This guide provides the instructions for launching a well-connected, production-ready Ceramic node in the cloud environment.
-
-## Who should run a Ceramic node?
-
----
-
-To run your application on `mainnet` you'll need to run your own production-ready node.
-
-## Things to know
-
----
-
-**Ceramic networks**
-There are currently three main Ceramic networks:
-- `mainnet`
-- `testnet-clay`
-- `dev-unstable`
-
-Learn more about each network [here](../../networking/networks.md).
-
-By default, Ceramic will connect to `testnet-clay` and a [Ceramic Anchor Service](https://github.com/ceramicnetwork/ceramic-anchor-service) running on Gnosis. When you are ready to get on Ceramic `mainnet`, check out [this guide](../../../../composedb/guides/composedb-server/access-mainnet) to get access to our `mainnet` anchor service running on Ethereum mainnet.
-
-**Supported platforms** – You can run Ceramic nodes on a cloud provider of your choice. This guide includes instructions for DigitalOcean Kubernetes, but they
-can be applied to most other cloud providers, such as AWS.
-
-**Supported Operating Systems:**
-
-- Linux
-
-:::note
-At the moment, developers are provided with Linux-based docker images for cloud deployment.
-:::
-
-**Compute requirements:**
-
-You’ll need sufficient compute resources to power `ceramic-one`, `js-ceramic` and `PostgreSQL`. Below are the recommended requirements:
-
-- 4 vCPUs
-- 8GB RAM
-
-## Required steps
-
----
-
-Below are the steps required for running a Ceramic node in production.
-
-
-### Configure your Kubernetes Cluster
-
-Running a Ceramic Node on DO Kubernetes will require two tools:
-
-- [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes command line tool
-- [doctl](https://docs.digitalocean.com/reference/doctl/how-to/install/) - the Digital Ocean command line tool
-
-Make sure you have these tools installed on your machine before proceeding to the next step of this guide.
-
-To create a DigitalOcean Kubernetes cluster, follow the official [DigitalOcean tutorial](https://docs.digitalocean.com/products/kubernetes/how-to/create-clusters/). The process of setting up your Kubernetes cluster will take about 10 minutes. Once it’s up and running, you are good to continue with the next step.
-
-### Connect to your Kubernetes Cluster
-
-Once the cluster is up and running, you will be provided with a command, unique to your cluster, that you can use to authenticate it on your local machine. For example:
-
-```bash
-doctl kubernetes cluster kubeconfig save 362dda8b-b555-4c47-9bf0-1a81cf58e0a8
-```
-
-Run this command in your local terminal. After authenticating, verify the connectivity:
-
-```bash
-kubectl config get-contexts
-```
-
-### Deploy Ceramic
-
-Running a Ceramic node will require configuring three components:
-- `ceramic-one` - a binary which contains the Ceramic Recon protocol implementation in Rust
-- `js-ceramic` - component which provides the API interface for Ceramic applications
-- `postgres` - a database used for indexing
-
-To simplify the configuration of all these services, you can use the [SimpleDeploy](https://github.com/ceramicstudio/simpledeploy/tree/main), a set of infra scripts that will make the configuration process faster and easier.
-
-1. Clone the [simpledeploy](https://github.com/ceramicstudio/simpledeploy.git) repository and enter `ceramic-one` folder of the created directory:
-
-```bash
-git clone https://github.com/ceramicstudio/simpledeploy.git
-cd simpledeploy/k8s/base/ceramic-one
-```
-
-2. Create a namespace for the nodes:
-
-```bash
-export CERAMIC_NAMESPACE=ceramic-one-0-17-0
-kubectl create namespace ${CERAMIC_NAMESPACE}
-```
-
-3. Create ephemeral secrets for js-ceramic and postgres:
-
-```bash
-./scripts/create-secrets.sh
-```
-
-4. Apply manifests:
-
-```bash
-kubectl apply -k .
-```
-
-5. Wait for the pods to start. It will take a few minutes for the deployment to pull the docker images and start the containers. You can watch the process with the following command:
-
-```bash
-kubectl get pods --watch --namespace ceramic-one-0-17-0
-```
-
-You will know that your deployment is up and running when all of the processes have a status `Running` as follows:
-
-```bash
-NAME READY STATUS RESTARTS AGE
-ceramic-one-0 1/1 Running 0 77s
-ceramic-one-1 1/1 Running 0 77s
-js-ceramic-0 1/1 Running 0 77s
-js-ceramic-1 1/1 Running 0 77s
-postgres-0 1/1 Running 0 77s
-```
-
-Hit `^C` on your keyboard to exit this view.
-
-:::note
-
-You can easily access the logs of each of the containers by using the command below and configuring the container name. For example, to access the Ceramic node logs, you can run:
-
-`kubectl logs --follow --namespace ceramic-one-0-17-0 js-ceramic-0`
-
-:::
-
-### Accessing your node
-
-The Ceramic daemon serves an HTTP API that clients use to interact with your Ceramic node. The default API port is `7007`. SimpleDeploy scripts include a Load Balancer configuration for `js-ceramic` and `ceramic-one` pods which allows you to expose the service to the outside world and interact with your node using an external IP. For example, you can access the external IP of the `js-ceramic` pod using the following command:
-
-`kubectl get svc --namespace ceramic-one-0-17-0 js-ceramic-lb-1`
-
-After running this command you will see an output similar to the following:
-
-```bash
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-js-ceramic-lb-1 LoadBalancer 10.245.205.115 152.42.151.112 7007:30614/TCP 18m
-```
-
-The `EXTERNAL-IP` can be used to access your `js-ceramic` node. To test it out, copy the external IP address provided above and substitute it in the following health check command:
-
-`curl 152.42.151.112:7007/api/v0/node/healthcheck`
-
-You should see the output stating that the connection is alive:
-
-`Alive!`
-
-
-### Connect to the mainnet anchor service
-By default, your Ceramic node will connect to the Ceramic `testnet-clay` network. In order to connect your application to mainnet, you will have to configure your node and verify your node DID for using the Ceramic Anchor Service (CAS). You can find a detailed step-by-step guide [here](../../../../composedb/guides/composedb-server/access-mainnet).
-
-
-
----
-
-### Example with Docker containers
-
-All state in this configuration is ephemeral; for persistence, use docker-compose.
-
-1. Start ceramic-one using the host network
-
-```bash
-docker run --network=host \
- public.ecr.aws/r5b3e0r5/3box/ceramic-one:latest
-```
-
-2. Start js-ceramic using the host network
-
-```bash
-docker run --network=host ceramicnetwork/js-ceramic:develop
-```
-
-### Docker-compose
-
-1. Create a testing directory, and enter it.
-
-```bash
-mkdir ceramic-recon
-cd ceramic-recon
-```
-
-2. Create a file called `docker-compose.yaml` with the configuration shown in the example below and save it:
-
-```yaml
-version: '3.8'
-
-services:
- ceramic-one:
- image: public.ecr.aws/r5b3e0r5/3box/ceramic-one:0.19.0
- network_mode: "host"
- volumes:
- - ceramic-one-data:/root/.ceramic-one
-
- js-ceramic:
- image: ceramicnetwork/js-ceramic:develop
- environment:
- - CERAMIC_RECON_MODE=true
- network_mode: "host"
- volumes:
- - js-ceramic-data:/root/.ceramic
- - ./daemon.config.json:/root/.ceramic/daemon.config.json
- command: --ipfs-api http://localhost:5101
-
-volumes:
- ceramic-one-data:
- driver: local
- js-ceramic-data:
- driver: local
-```
-
-3. Update the js-ceramic configuration file `daemon.config.json` with the configurations provided below.
-
-:::note
-The js-ceramic configuration file can be found at the following path: `$HOME/.ceramic/daemon.config.json`
-:::
-
-
-```json
-{
- "anchor": {
- "auth-method": "did"
- },
- "http-api": {
- "cors-allowed-origins": [
- ".*"
- ],
- "admin-dids": [
- ]
- },
- "ipfs": {
- "mode": "remote",
- "host": "http://localhost:5101"
- },
- "logger": {
- "log-level": 2,
- "log-to-files": false
- },
- "metrics": {
- "metrics-exporter-enabled": false,
- "prometheus-exporter-enabled": true,
- "prometheus-exporter-port": 9465
- },
- "network": {
- "name": "testnet-clay"
- },
- "node": { },
- "state-store": {
- "mode": "fs",
- "local-directory": "/root/.ceramic/statestore/"
- },
- "indexing": {
- "db": "sqlite://root/.ceramic/db.sqlite3",
- "allow-queries-before-historical-sync": true,
- "disable-composedb": false,
- "enable-historical-sync": false
- }
-}
-```
-
-4. Run `docker-compose up -d`
-
-
----
diff --git a/docs/protocol/js-ceramic/guides/ceramic-nodes/running-locally.md b/docs/protocol/js-ceramic/guides/ceramic-nodes/running-locally.md
deleted file mode 100644
index a6b4b8fc..00000000
--- a/docs/protocol/js-ceramic/guides/ceramic-nodes/running-locally.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# Launch a local Ceramic node
-
----
-
-To run a local Ceramic node you will generally need to run two key components:
-- `js-ceramic` - an API interface for Ceramic applications
-- `ceramic-one` - a binary that provides access to the Ceramic data network through the protocol implementation in Rust.
-
-You should always start the `ceramic-one` component first so that the `js-ceramic` component can connect to it.
-
-## Prerequisites
-
----
-
-Installing `js-ceramic` requires the following:
-- a terminal of your choice,
-- [Node.js](https://nodejs.org/en/) v20,
-- [npm](https://www.npmjs.com/get-npm) v10
-
-Make sure to have these installed on your machine.
-
-
-## Setting up the `ceramic-one` component
-
-The easiest way to install `ceramic-one` is using the [Homebrew](https://brew.sh/) package manager. After installing Homebrew on your local machine, you can install `ceramic-one` using the following command:
-
-```bash
-brew install ceramicnetwork/tap/ceramic-one
-```
-
-Once installed, start the `ceramic-one` binary by running the command provided below. Note that you can select a different network using the `--network` flag:
-
-```bash
-ceramic-one daemon --network testnet-clay
-```
-
-:::note
-There are many flags for the daemon CLI that can be passed directly or set as environment variables. You can pass the `-h` flag to see the complete list as follows:
-
-```ceramic-one daemon -h```
-:::
-
-You also have an option of running the `ceramic-one` binary using Docker. Check out the instructions in the [README of rust-ceramic repository](https://github.com/ceramicnetwork/rust-ceramic?tab=readme-ov-file).
-
-
-## Setting up the `js-ceramic` component
-
-The Ceramic command line interface provides an easy way to start a JS Ceramic node in a local Node.js environment. This is a great way to get started developing with Ceramic before moving to a cloud-hosted node for production use cases.
-
-
-### Install the Ceramic CLI
-
-Open your console and install the CLI using npm:
-
-```bash
-npm install -g @ceramicnetwork/cli
-```
-
-### Launch the `js-ceramic` node
-
-Use the `ceramic daemon` command to start a local JS Ceramic node, connected to the [Clay Testnet](../../networking/networks.md#clay-testnet) by default and running at `http://localhost:7007`:
-
-```bash
-ceramic daemon
-```
-
-### Configure your network
-
-(Optional) By default, the JS CLI starts a node on the [Clay Testnet](../../networking/networks.md#clay-testnet). If you would like to use a different network, you can specify this using the `--network` option. View [available networks](../../networking/networks.md). Note that the CLI cannot be used with [Mainnet](../../networking/networks.md#mainnet).
-
-### Configure a node URL
-
-(Optional) It is possible to use the CLI with a remote Ceramic node over HTTP, instead of a local node. To do this, use the `config set` command to set the `ceramicHost` variable to the URL of the node you wish to use.
-
-```bash
-ceramic config set ceramicHost 'https://yourceramicnode.com'
-```
-
-## Monitoring
-You can always check if `js-ceramic` and `ceramic-one` components are available by running the commands listed below.
-
-### `js-ceramic` service's availability
-
-Check the `js-ceramic` service’s availability with the healthcheck endpoint:
-
-```bash
-curl http://localhost:7007/api/v0/node/healthcheck
-```
-
-### `ceramic-one` service's availability
-
-Check the `ceramic-one` service’s availability with the liveness endpoint:
-
-```bash
-curl http://127.0.0.1:5101/ceramic/liveness
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/guides/guides-index.md b/docs/protocol/js-ceramic/guides/guides-index.md
deleted file mode 100644
index 2567b869..00000000
--- a/docs/protocol/js-ceramic/guides/guides-index.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Ceramic Development Guides
----
-
-Guides that support development on Ceramic.
-
-### Ceramic Nodes
-
-- [**Running Locally**](./ceramic-nodes/running-locally.md)
-- [**Running in the Cloud**](./ceramic-nodes/running-cloud.md)
-
-### Ceramic Clients
-
-- [**JavaScript Client**](./ceramic-clients/javascript-clients/ceramic-http.md)
-- [**Authentication**](./ceramic-clients/authentication/key-did.md)
-- [**Stream APIs**](./ceramic-clients/stream-api/caip10-link.md)
diff --git a/docs/protocol/js-ceramic/networking/data-feed-api.md b/docs/protocol/js-ceramic/networking/data-feed-api.md
deleted file mode 100644
index d3387dcc..00000000
--- a/docs/protocol/js-ceramic/networking/data-feed-api.md
+++ /dev/null
@@ -1,167 +0,0 @@
-# Data Feed API
-
-The Ceramic Data Feed API gives developers a way to keep track of all the new state changes that are happening in the Ceramic network. There are 2 scenarios that would trigger an update on the feed:
-
-1. Writes explicitly sent to the Ceramic node via the HTTP Client
-2. Writes discovered from the network for Streams belonging to Models that are indexed on the Ceramic node
-
-This information can be used to take action or simply stay updated on the current status of a stream or even a network. The Data Feed API enables developers to build custom indexers or databases.
-
-
-# Server-Sent Events and EventSource interface
-To understand the Data Feed API, it's important to have a basic understanding of [Server-Sent Events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) and the [EventSource](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) interface.
-
-SSE is a simple and efficient way for servers to send real-time updates to web clients over a single HTTP connection. It works with the standard HTTP protocol, which makes it great for situations where the server needs to constantly update the client.
-
-The EventSource interface is a JavaScript API that makes it easy for web applications to consume SSE. It allows clients to receive updates as a stream of events, making it simple to integrate real-time data into web apps.
-
----
-
-# Getting started
-The guide below will cover the main steps you need to follow to start interacting with the Data Feed API.
-
-## Configure your working environment
-
-### 1. Run a Ceramic node
-To interact with Data Feed API you will need a Ceramic testnet or mainnet node up and running. Check out the [Quickstart](../../../composedb/set-up-your-environment.mdx) for instructions on how to run Ceramic nodes locally and [Running in the Cloud](../../../composedb/guides/composedb-server/running-in-the-cloud.mdx) guide for instructions on how to run a Ceramic node in the cloud.
-
-:::tip
-Make sure that your Ceramic node is running Ceramic version 5.3 or higher so that it supports the Data Feed logic.
-:::
-
-### 2. Install additional dependencies
-Depending on how you use the Data Feed API, you may need additional dependencies installed on your machine:
-- `cross-eventsource` to use `EventSource` isomorphically on Node.js and in the browser:
-
-```bash
-npm i cross-eventsource
-```
-
-- `@ceramicnetwork/codecs` and `codeco` for encoding and decoding:
-```bash
-npm i @ceramicnetwork/codecs codeco
-```
-
-## Interact with the Data Feed API
-
-Below you can see a few examples of how you can interact with the Data Feed API. Currently, the Data Feed API is read-only, with support for `GET` methods and access to Ceramic's aggregation layer.
-
-The following `GET` request will return objects of the following type as activity occurs on the Ceramic node:
-
-**Request:**
-`GET /api/v0/feed/aggregation/documents`
-
-**Response:**
-```javascript
-FeedDocument = {
- commitId: CommitID
- content: any
- metadata: StreamMetadata
- eventType: EventType
-}
-```
-
-For example, the following request will return a response with the details provided below.
-
-**Request:**
-`curl http://localhost:7007/api/v0/feed/aggregation/documents`
-
-**Response:**
-```javascript
-data: {
- "commitId": "k6zn3t2py84tn1dpy24625xjv65g4r23wuqpch6mmrywshreivaqiyaqctrz2ba5kk0qjvec61pbmyl15b49zxfd8qd3aiiupltnpveh45oiranqr4njj40",
- "content": "{...}",
- "metadata": {
- "controllers": [
- "did:key:z6MknE3RuK7XU2W1KGCQrsSVhzRwCUJ9uMb6ugwbELm9JdP6"
- ],
- "model": "kh4q0ozorrgaq2mezktnrmdwleo1d"
- },
- "eventType": 2
-}
-
-```
-
-
-
-The recommended way of interacting with the Data Feed API is by using event listeners, as shown in the example below. The provided example uses `localhost:7007` as the host:
-
-```typescript
-import { EventSource } from "cross-eventsource";
-import { JsonAsString, AggregationDocument } from '@ceramicnetwork/codecs';
-import { decode } from "codeco";
-
-const source = new EventSource('http://localhost:7007/api/v0/feed/aggregation/documents')
-const Codec = JsonAsString.pipe(AggregationDocument)
-
-source.addEventListener('message', (event) => {
- console.log('message', event)
- //use JsonAsString, and AggregationDocument to decode and use event.data
- const parsedData = decode(Codec, event.data);
- console.log('parsed', parsedData)
-})
-
-source.addEventListener('error', error => {
- console.log('error', error)
-})
-
-console.log('listening...')
-```
-
-## Resumability
-
-If your application drops a connection and needs to resume where it left off, the Data Feed API supports resumption. Every event emitted by the Data Feed API contains a `resumeToken` property. When initiating a connection, you can ask the API to emit only the entries after a given `resumeToken`.
-
-For example, your application got an entry containing `resumeToken: "1714742204565000000"`. When connecting, pass the token value as a query parameter to emit the entries after this checkpoint:
-
-```javascript
-// ... same as the code snippet above
-const url = new URL("http://localhost:7007/api/v0/feed/aggregation/documents")
-url.searchParams.set('after', '1714742204565000000') // Value of the last resumeToken
-// Connection to http://localhost:7007/api/v0/feed/aggregation/documents?after=1714742204565000000
-const source = new EventSource(url)
-```
-
-
-## Frequently asked questions
-
-### How do I get the StreamID from the feed?
-
-The StreamID can be extracted from the CommitID included in the feed response, as seen below:
-
-```tsx
-// ... same setup as the listener example above
-source.addEventListener('message', (event) => {
-  console.log('message', event)
-  // use JsonAsString and AggregationDocument to decode and use event.data
-  const parsedData = decode(Codec, event.data)
-  const streamId = parsedData.commitId.baseID
-  console.log('parsed', parsedData)
-  console.log('StreamID', streamId)
-})
-```
-
-### What are the delivery guarantees of the Feed API?
-
-The feed sends data according to an “at least once” guarantee. For every stream change, the latest stream state is delivered. For example, if a stream went through changes `a`, `b`, `c` giving states `A`, `B`, `C`, you could expect three events over the Feed API: `C`, `C`, `C`.
-
-### How far in the past can I resume from?
-
-You can expect up to 7 days' worth of history to be stored.
diff --git a/docs/protocol/js-ceramic/networking/event-fetching.md b/docs/protocol/js-ceramic/networking/event-fetching.md
deleted file mode 100644
index ce7ce3cb..00000000
--- a/docs/protocol/js-ceramic/networking/event-fetching.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Event Fetching
-
-Once a tip is discovered through the [Tip Gossip](tip-gossip.md) or [Tip Query](tip-queries.md) protocols, a node knows both the StreamID and the latest event CID of the stream. The latest Event contains the CID of the `prev` Event, and so on, until the Init Event is found in the event log. The Init Event's CID is also in the StreamID. This is proof that the tip is part of the stream identified by the StreamID.
-
-The tip is one of [Init, Data, or Time Event](../streams/event-log.md). If the tip CID is the Init Event CID then the stream has never been updated and the Init Event is the complete event log. If the tip CID points to a Data or Time Event then that event will contain a `prev` field with a CID link to its previous event. IPFS can be used to retrieve this event. Similarly, you can use IPFS to recursively fetch and resolve every `prev` event in an event log until reaching the Init Event. At that point you have retrieved and synced the entire stream.
-
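-As a rough sketch of this recursive sync (assuming an IPFS HTTP client at
-`http://localhost:5001`, and treating the `prev` link as directly accessible
-on each event; signed events are actually wrapped in a JWS envelope):
-
-```tsx
-import { create } from 'ipfs-http-client'
-import type { CID } from 'multiformats/cid'
-
-const ipfs = create({ url: 'http://localhost:5001' })
-
-// Walk the event log backwards from the tip until the Init Event
-// (which has no `prev`), then return the events oldest-first
-async function syncStream(tip: CID): Promise<unknown[]> {
-  const events: unknown[] = []
-  let cursor: CID | undefined = tip
-  while (cursor !== undefined) {
-    const { value } = await ipfs.dag.get(cursor)
-    events.push(value)
-    cursor = value.prev // undefined once the Init Event is reached
-  }
-  return events.reverse()
-}
-```
-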
-Fetching an event with IPFS from a peer relies on both [IPFS BitSwap](https://docs.ipfs.tech/concepts/bitswap/) and the [IPFS DHT](https://docs.ipfs.tech/concepts/dht/).
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/networking/networking-index.md b/docs/protocol/js-ceramic/networking/networking-index.md
deleted file mode 100644
index 710e47e3..00000000
--- a/docs/protocol/js-ceramic/networking/networking-index.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Networking
-
-Networking sub-protocols for Ceramic.
-
-### Overview
-
-Ceramic streams and nodes are grouped into independent networks. These networks can be either for public use or for use by a specific community. There are currently a few commonly shared and default networks. When a stream is published in a network, other nodes in the same network are able to query and discover the stream, receive the latest stream events (tips), and sync the entire event set for a stream. Each of these network functions is defined by a sub-protocol listed below.
-
-### [Networks](networks.md)
-
-Networks are collections of Ceramic [nodes](../nodes/overview.md) that share specific configurations and communicate over dedicated [libp2p](https://libp2p.io/) pubsub topics. They are easily identified by a path string, for example `/ceramic/mainnet`.
-
-### [Data Feed API](data-feed-api.md)
-
-The Ceramic Data Feed API gives developers a way to keep track of all the new state changes that are happening in the Ceramic network. This enables developers to customize the way their data is indexed and queried, and enables the development of new custom database products on top of Ceramic.
diff --git a/docs/protocol/js-ceramic/networking/networks.md b/docs/protocol/js-ceramic/networking/networks.md
deleted file mode 100644
index 95dc0359..00000000
--- a/docs/protocol/js-ceramic/networking/networks.md
+++ /dev/null
@@ -1,76 +0,0 @@
-# Networks
-
-Information about the default Ceramic networks
-
-## Overview
----
-
-Networks are collections of Ceramic [nodes](../nodes/overview.md) that share specific configurations and communicate over dedicated [libp2p](https://libp2p.io/) pubsub topics. Networks are disjoint from one another; streams that exist on one network are **not** discoverable or usable on another.
-
-These pubsub topics are used to relay all messages for the defined networking sub protocols.
-
-## All Networks
----
-
-An overview of the various Ceramic networks available today:
-
-| Name | Network ID | Ceramic Pubsub Topic | Timestamp Authority | Type |
-| --- | --- | --- | --- | --- |
-| Mainnet | mainnet | /ceramic/mainnet | Ethereum Mainnet (EIP155:1) | Public |
-| Clay Testnet | testnet-clay | /ceramic/testnet-clay | Ethereum Gnosis Chain | Public |
-| Dev Unstable | dev-unstable | /ceramic/dev-unstable | Ethereum Goerli Testnet | Public |
-| Local | local | /ceramic/local-$(randomNumber) | Ethereum by Truffle Ganache | Private |
-| In-memory | inmemory | | None | Private |
-
-:::note
-    There is currently a proposal to decompose each network into multiple pubsub topics for scalability; the pubsub topics will remain prefixed by the network identifier (`/ceramic/<network>/`). See [CIP-120](https://github.com/ceramicnetwork/CIP/blob/main/CIPs/cip-120.md)
-:::
-
-## Public networks
----
-
-Ceramic has three public networks that can be used when building applications:
-
-- Mainnet
-- Testnet Clay
-- Dev Unstable
-
-### Mainnet
-
-Mainnet is the main public network used for production deployments on Ceramic. Ceramic's mainnet nodes communicate over the dedicated `/ceramic/mainnet` libp2p pubsub topic and use Ethereum's mainnet blockchain (`EIP155:1`) for generating timestamps used in [time events](../streams/event-log.md) for streams.
-
-### Clay Testnet
-
-Clay Testnet is a public Ceramic network used by the community for application prototyping, development, and testing purposes. Ceramic core devs also use Clay for testing official protocol release candidates. While we aim to maintain a high level of quality on the Clay testnet that mirrors the expectations of Mainnet as closely as possible, ultimately the reliability, performance, and stability guarantees of the Clay network are lower than that of Mainnet. Because of this, **the Clay network should not be used for applications in production**.
-
-Clay nodes communicate over the dedicated `/ceramic/testnet-clay` libp2p pubsub topic and use Ethereum's Gnosis blockchain for generating timestamps used in [time events](../streams/event-log.md) for streams.
-
-### Dev Unstable
-
-Dev Unstable is a public Ceramic network used by Ceramic core protocol developers for testing new protocol features and the most recent commits on the develop branch of `js-ceramic`. It should be considered **unstable and highly experimental**; only use this network if you want to test the most cutting edge features, but expect issues.
-
-Dev Unstable nodes communicate over the dedicated `/ceramic/dev-unstable` libp2p pubsub topic and use Ethereum's Goerli testnet blockchain for generating timestamps used in [time events](../streams/event-log.md) for streams.
-
-## Private Networks
----
-
-You can prototype applications on Ceramic by running the protocol in a local environment completely disconnected from other public nodes. Here "private" indicates that it is independent of the mainnet network, but does **not** imply any confidentiality guarantees. This is still public data.
-
-### Local
-
-Local is a private test network used for the local development of Ceramic applications. Nodes connected to the same local network communicate over a randomly-generated libp2p topic `/ceramic/local-$(randomNumber)` and use a local Ethereum blockchain provided by Truffle's [Ganache](https://trufflesuite.com/ganache/) for generating timestamps used in [time events](../streams/event-log.md) for streams.
-
-## Examples
----
-
-### TypeScript Definitions
-
-```tsx
-enum Networks {
- MAINNET = 'mainnet', // The prod public network
- TESTNET_CLAY = 'testnet-clay', // Should act like mainnet to test apps
- DEV_UNSTABLE = 'dev-unstable', // May diverge from mainnet to test Ceramic
- LOCAL = 'local', // local development and testing
- INMEMORY = 'inmemory', // local development and testing
-}
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/networking/tip-gossip.md b/docs/protocol/js-ceramic/networking/tip-gossip.md
deleted file mode 100644
index 02ac94fb..00000000
--- a/docs/protocol/js-ceramic/networking/tip-gossip.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Tip Gossip
-
-When a stream is updated, the latest event (tip) is gossiped and propagated to all the nodes in a network that are interested in that particular stream. Additionally, listening for all tips allows a node to learn about streams it did not previously know about. This allows all interested nodes in the network to quickly get the latest update and state for a stream.
-
-## Protocol
----
-
-### Publishing Updates
-
-When an event is created and appended to a stream, the node will publish an update message to the network. All messages are broadcast on the [libp2p pubsub](https://github.com/libp2p/specs/tree/master/pubsub) topic for the [network](networks.md) this node is configured for. Any other node listening on this network will receive the update and can then decide whether to take further action or discard it.
-
-### Update Messages
-
-```tsx
-type UpdateMessage = {
- typ: MsgType.UPDATE //0
- stream: StreamID
- tip: CID
- model?: StreamID
-}
-```
-
-Where:
-
-- **`typ`** - the message is an update message, enum `0`
-- **`stream`** - streamId of the stream which this update is for
-- **`tip`** - CID of the latest event (tip) of the stream, the update
-- **`model`** - streamId of the ComposeDB data model that the stream being updated belongs to (optional)
-
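-As an illustrative sketch (assuming an IPFS HTTP client with pubsub enabled,
-and plain JSON serialization rather than the exact wire encoding js-ceramic
-uses), publishing an update on the Clay Testnet topic could look like:
-
-```tsx
-import { create } from 'ipfs-http-client'
-import { fromString } from 'uint8arrays/from-string'
-
-const ipfs = create({ url: 'http://localhost:5001' })
-
-// Illustrative only: publish an update message for a stream on the
-// network's pubsub topic (serialization here is plain JSON)
-async function publishUpdate(streamId: string, tip: string): Promise<void> {
-  const message = { typ: 0, stream: streamId, tip }
-  await ipfs.pubsub.publish(
-    '/ceramic/testnet-clay',
-    fromString(JSON.stringify(message)),
-  )
-}
-```
-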
-### Replicating Updates
-
-Any nodes that have received an update message and are interested in that stream can now save the tip (update). Any node that has saved this update can now answer [tip queries](tip-queries.md) for this stream. As long as there is at least one node in the network with this information (tip) saved, the publishing node can go down without affecting the availability of the stream.
-
-## Examples
----
-
-### TypeScript Definitions
-
-```tsx
-/**
- * Ceramic Pub/Sub message type.
- */
-enum MsgType {
- UPDATE = 0,
- QUERY = 1,
- RESPONSE = 2,
- KEEPALIVE = 3,
-}
-
-type UpdateMessage = {
- typ: MsgType.UPDATE
- stream: StreamID
- tip: CID // the CID of the latest commit
- model?: StreamID // optional
-}
-
-// All nodes will always ignore this message
-type KeepaliveMessage = {
- typ: MsgType.KEEPALIVE
- ts: number // current time in milliseconds since epoch
- ver: string // current ceramic version
-}
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/networking/tip-queries.md b/docs/protocol/js-ceramic/networking/tip-queries.md
deleted file mode 100644
index a6057367..00000000
--- a/docs/protocol/js-ceramic/networking/tip-queries.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Tip Queries
-
-Ceramic streams are identified by a [URI](../streams/uri-scheme) called a StreamID. Nodes that want to sync a stream need to query the network for the tip of that stream using its StreamID.
-
-:::note
-  Tips are the most recent Init, Data, or Time Event for a given stream.
-:::
-
-
-## Protocol
----
-
-A node resolving a Ceramic URI sends a query message to the network and then listens for responses with the candidates for the current tip of the stream. Any node that is interested in the same stream on the network and has stored its tips will respond with a response message. All messages are sent on the [libp2p pubsub](https://github.com/libp2p/specs/tree/master/pubsub) topic for the [network](networks.md) the node is configured for.
-
-### Query Message
-
-```tsx
-type QueryMessage = {
- typ: MsgType.QUERY // 1
- id: string
- stream: StreamID
-}
-```
-
-Where:
-
-- **`typ`** - the message is a query message, enum `1`
-- **`stream`** - the streamId that is being queried or resolved
-- **`id`** - a multihash `base64url.encode(sha256(dagCBOR({typ:1, stream: streamId})))`; it can generally be treated as a random string that is used to pair queries to responses
-
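-As a rough sketch, the query `id` can be derived from the formula above using `@ipld/dag-cbor` and `multiformats`; whether `stream` is serialized as a string or as raw StreamID bytes is an implementation detail, and a string is assumed here:
-
-```tsx
-import * as dagCBOR from '@ipld/dag-cbor'
-import { sha256 } from 'multiformats/hashes/sha2'
-import { base64url } from 'multiformats/bases/base64'
-
-// base64url.encode(sha256(dagCBOR({ typ: 1, stream: streamId })))
-async function queryId(streamId: string): Promise<string> {
-  const bytes = dagCBOR.encode({ typ: 1, stream: streamId })
-  const digest = await sha256.digest(bytes) // a multihash digest
-  return base64url.encode(digest.bytes)
-}
-```
-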
-### Response Message
-
-```tsx
-type ResponseMessage = {
- typ: MsgType.RESPONSE // 2
- id: string
- tips: Map<string, CID> // StreamID string -> tip CID
-}
-```
-
-Where:
-
-- **`typ`** - the message is a response message, enum `2`
-- **`id`** - id of the query that this message is a response to
-- **`tips`** - map of `StreamID` to CID of stream tip
-
-:::note
- Currently this will only ever have a single `StreamID` in the query, but Ceramic will likely have batch queries at some point in the future.
-:::
-
-## Examples
----
-
-### TypeScript Definitions
-
-```tsx
-enum MsgType { // Ceramic Pub/Sub message type.
- UPDATE = 0,
- QUERY = 1,
- RESPONSE = 2,
- KEEPALIVE = 3,
-}
-
-type QueryMessage = {
- typ: MsgType.QUERY
- id: string
- stream: StreamID
-}
-
-type ResponseMessage = {
- typ: MsgType.RESPONSE
- id: string
- tips: Map<string, CID> // StreamID string -> tip CID
-}
-```
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/nodes/overview.md b/docs/protocol/js-ceramic/nodes/overview.md
deleted file mode 100644
index d9654cd9..00000000
--- a/docs/protocol/js-ceramic/nodes/overview.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Nodes Overview
----
-
-## Ceramic Nodes
-
-A Ceramic node is a bundle of services and long-lived processes that support the protocol and provide access to the Ceramic Network. Current implementations bundle and run most of the services and subprotocols defined here, including the following:
-
-
-### Ceramic Services
-
-| Service | Description |
-| --- | --- |
-| StateStore | Tracks and stores the latest tips for pinned streams and caches stream state. |
-| Networking | Runs the stream query and update protocols on Gossipsub and manages peer connections. |
-| API | Provides HTTP API service for connected Ceramic clients to read, write and query streams. Additionally, some node management functions are included. |
-| Timestamping | Regularly publishes timestamp proofs and Ceramic time events for a given set of events. |
-
-:::note
-
- In the future, node implementations may only provide a subset of services to the network. For example, nodes may be optimized to provide only indexing, long term storage, client APIs etc.
-:::
-
-## Timestamp Nodes
-
----
-
-Timestamping nodes support a small but important subset of the Ceramic protocol. Timestamping is entirely described by [CAIP-168 IPLD Timestamp Proof](https://chainagnostic.org/CAIPs/caip-168) and Ceramic Time Events. Timestamp services aggregate events from streams to be timestamped, construct Merkle proofs, publish transactions and publish timestamp events to the Ceramic Network. Ceramic mainnet currently supports `f(bytes32)` timestamp transaction types on Ethereum mainnet. This transaction type is entirely described by the [`eip155` namespace](https://github.com/ChainAgnostic/namespaces/blob/main/eip155/caip168.md) for CAIP-168.
-
-## Implementations
-
----
-
-The following table includes active node implementations:
-
-| Node | Name | Language | Description | Status | Maintainer |
-| --- | --- | --- | --- | --- | --- |
-| Ceramic | [js-ceramic](https://github.com/ceramicnetwork/js-ceramic/) | JavaScript | Complete Ceramic implementation. Runs all Ceramic core services, and connects to an IPFS node for all IPFS, libp2p, IPLD services needed. | Production | 3Box Labs |
-| Timestamp | [ceramic-anchor-service](https://github.com/ceramicnetwork/ceramic-anchor-service) | JavaScript | Complete timestamp services. Supports f(bytes32) and raw transaction types for EVM (EIP-155) blockchains. | Production | 3Box Labs |
-
-Long term, Ceramic is targeting multiple implementations of the protocol to support general resilience, robustness and security. Want to work on a node implementation in a new language like Rust or Go? Get in touch on the Forum!
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/nodes/running-a-node.md b/docs/protocol/js-ceramic/nodes/running-a-node.md
deleted file mode 100644
index b882c836..00000000
--- a/docs/protocol/js-ceramic/nodes/running-a-node.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# Running a Node
----
-This guide explains how to run a Ceramic node and covers recommendations for keeping your node running smoothly.
-
-## Installation
----
-
-### Install and Run the Ceramic CLI
-
-The CLI can be installed and run from NPM using the following command:
-
-```bash
-npx @ceramicnetwork/cli daemon
-```
-
-:::note
-Make sure that you have `ceramic-one` binary running in the background. To set it up, follow the setup steps [here](../guides/ceramic-nodes/running-locally#setting-up-the-ceramic-one-component).
-:::
-
-
-This will install the CLI and start the daemon, creating all of the initial files and leaving you with a node running on the Clay TestNet.
-
-## Operations Considerations
----
-
-### Log Rotate
-
-If you enable logging to files, a long-running node can fill the hard drive, so you will want to enable `logrotate`. This can be done with the following steps:
-
-1. Install `logrotate` using the following command:
-
-```bash
-sudo apt install logrotate
-```
-
-2. Create a file in `/etc/logrotate.d/ceramic` with the following contents:
-
-```bash
-/home/ubuntu/.ceramic/logs/*.log {
- daily
- missingok
- rotate 7
- compress
- delaycompress
- notifempty
- create 0640 ubuntu ubuntu
- sharedscripts
- postrotate
- systemctl restart ceramic
- endscript
-}
-```
-
-3. Enable and start the `logrotate` service using the following commands:
-
-```bash
-sudo systemctl enable logrotate
-sudo systemctl start logrotate
-```
-
-### Monitoring
-
-It is strongly recommended to use your existing monitoring system to collect and process the [metrics offered by the node](../../../composedb/guides/composedb-server/server-configurations.mdx).
-
-
-#### Availability
-
-Check the `js-ceramic` service’s availability with the healthcheck endpoint:
-
-```bash
-curl http://localhost:7007/api/v0/node/healthcheck
-```
-
-Check the `ceramic-one` service’s availability with the liveness endpoint:
-
-```bash
-curl http://127.0.0.1:5101/ceramic/liveness
-```
-
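-For continuous monitoring, a small availability probe can wrap both endpoints. The sketch below assumes Node.js 18+ (built-in `fetch`) and the default ports shown above:
-
-```tsx
-const endpoints = {
-  'js-ceramic': 'http://localhost:7007/api/v0/node/healthcheck',
-  'ceramic-one': 'http://127.0.0.1:5101/ceramic/liveness',
-}
-
-async function checkAvailability(): Promise<void> {
-  for (const [name, url] of Object.entries(endpoints)) {
-    try {
-      const res = await fetch(url)
-      console.log(`${name}: ${res.ok ? 'up' : `unhealthy (HTTP ${res.status})`}`)
-    } catch (err) {
-      console.log(`${name}: unreachable (${(err as Error).message})`)
-    }
-  }
-}
-
-checkAvailability()
-```
-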
-#### Metrics
-
-Both `ceramic-one` and `js-ceramic` expose Prometheus-compatible metrics endpoints.
-
-The `ceramic-one` metrics endpoint is enabled by default:
-
-```bash
-curl http://127.0.0.1:9464/metrics # ceramic-one metrics
-```
-
-`js-ceramic` monitoring configuration is described [here](https://developers.ceramic.network/docs/composedb/guides/composedb-server/server-configurations#prometheus-endpoint0).
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/overview.md b/docs/protocol/js-ceramic/overview.md
deleted file mode 100644
index 3bc159ea..00000000
--- a/docs/protocol/js-ceramic/overview.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Ceramic Protocol
-
-Ceramic is a decentralized event streaming protocol that enables developers to build decentralized databases, distributed compute pipelines, authenticated data feeds, and more. Ceramic nodes can subscribe to subsets of streams, forgoing the need for a global network state. This makes Ceramic an eventually consistent system (as opposed to strongly consistent, like L1 blockchains), enabling web-scale applications to be built reliably.
-
-
-The latest release of Ceramic introduces a new Rust-based implementation of the Ceramic protocol, which offers performance and stability improvements as well as a new data synchronization protocol called Recon. Developers building on the Ceramic network will use two main components:
-- the `js-ceramic` component, which provides the API interface for Ceramic applications
-- the `ceramic-one` component, which provides access to the Ceramic data network (and contains the implementation of the Recon protocol).
-
-The protocol doesn't prescribe how to interpret events found within streams; this is left to the applications consuming the streams. Some examples of this type of application are:
-- [ComposeDB](../../composedb/getting-started)
diff --git a/docs/protocol/js-ceramic/streams/consensus.md b/docs/protocol/js-ceramic/streams/consensus.md
deleted file mode 100644
index 762ccdc6..00000000
--- a/docs/protocol/js-ceramic/streams/consensus.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Consensus
-
-## Consensus Model
-
----
-
-Event streams rely on a limited conflict resolution or consensus model. Global consensus and ordering is not needed for progress and most decisions are localized to the consuming party of a single event stream. Guarantees are limited, but if any two parties consume the same set of events for a stream, they will arrive at the same state.
-
-The underlying log structure of an event stream allows multiple parallel histories, or branches, to be created, resulting in a tree structure. A log, or valid event stream, is a single tree path from a known "latest" event to the Init Event. Latest events are also referred to as stream "tips". Logs can have multiple tips when there are branches in the log, and tip selection for the canonical log of a stream becomes a consensus problem.
-
-### Single stream consensus
-
-A tip and canonical log for a stream are selected by the following pseudo-algorithm and rules:
-
-1. Given a set of tips, traverse each tree path from tip until a commonly shared Time Event or the Init Event is reached.
-2. From the shared event, traverse each path in the opposite direction (towards the tips) until a Time Event is found (or the end of the log is reached). This set of events is considered the conflicting events.
-3. Given each Time Event, determine the blockheight of the transaction included in the timestamp proof. Select the path with the lowest blockheight. If a single path is selected, exit with that path and tip; otherwise continue. Most cases terminate here, as it is rare for two paths to have the same blockheight.
-4. If multiple tips have the same blockheight, select the path with the greatest number of events from the last timestamp proof to the tip. If a single path is selected, exit with that path and tip; otherwise continue.
-5. If the number of events is equal, choose the event and path with the smallest CID in binary format (an arbitrary but deterministic choice)
-
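-Condensed into code, rules 3-5 amount to a deterministic comparator. The sketch below assumes steps 1-2 have already produced a summary of each conflicting path; the `CandidatePath` shape is hypothetical:
-
-```tsx
-interface CandidatePath {
-  anchorBlockheight: number // blockheight of the first Time Event after the divergence
-  eventsSinceProof: number  // events from the last timestamp proof to the tip
-  tipBytes: Uint8Array      // tip CID in binary form
-}
-
-function compareBytes(a: Uint8Array, b: Uint8Array): number {
-  for (let i = 0; i < Math.min(a.length, b.length); i++) {
-    if (a[i] !== b[i]) return a[i] - b[i]
-  }
-  return a.length - b.length
-}
-
-// Returns the winning path according to rules 3-5 above.
-function selectCanonicalPath(paths: CandidatePath[]): CandidatePath {
-  return paths.slice().sort(
-    (a, b) =>
-      a.anchorBlockheight - b.anchorBlockheight || // 3. lowest blockheight wins
-      b.eventsSinceProof - a.eventsSinceProof ||   // 4. most events since proof wins
-      compareBytes(a.tipBytes, b.tipBytes)         // 5. smallest binary CID wins
-  )[0]
-}
-```
-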
-### Cross stream ordering
-
-It is assumed all timestamp events in a network are committed to the same blockchain, as specified by the `chainId` in the timestamp event. The main Ceramic network commits timestamp proofs to the Ethereum blockchain.
-
-The addition of timestamp events in streams gives some notion of relative global time for all events timestamped on the same blockchain. This allows events across different streams to be globally ordered if a higher-level protocol requires it. Ceramic events can also be ordered against transactions and state on the blockchain in which they are timestamped. On most secure blockchains you can also reference wall clock time within some reasonable bounds and order events both in and out of the system based on that.
-
-## Risks
-
----
-
-### Late Publishing
-
-Without any global consensus guarantees, all streams and their potential tips are not known by all participants at any point in time. There may be partitions in the network, local networks may exist, or individual participants may choose to intentionally withhold some events while publishing others. Selective publishing like this may or may not be malicious depending on the context in which the stream is consumed.
-
-Consider the following example: A user creates a stream, makes two conflicting updates and timestamps one of them earlier than the other, but only publishes the data of the update that was timestamped later. Now subsequent updates to the stream will be made on top of the second, published update. Every observer will accept these updates as valid since they have not seen the first update. However if the user later publishes the data of the earlier update, the stream will fork back to this update and all of the other updates made to the stream will be invalidated.
-
-Most of the time, the potential of an intentional late publishing attack isn't a concern in practice, as streams in Ceramic are generally controlled by a single user, and there's no incentive to attack one's own streams. This would become more of a concern, however, in streams with more sophisticated access control that allowed multiple end users to write into the same stream. In that case, all users of the stream would need to trust that every user who has (or has ever had) write access to the stream is not secretly sitting on timestamped writes they haven't yet published. Otherwise, they risk those writes being revealed later on and causing the stream to lose all writes that have occurred since the previously secret write was created.
-
-Additionally, note that late publishing may also be used as a deterrent to selling user identities. An identity or account buyer can't know that the seller is not keeping secret events that they will publish after the identity was sold.
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/streams/event-log.md b/docs/protocol/js-ceramic/streams/event-log.md
deleted file mode 100644
index f8b04caf..00000000
--- a/docs/protocol/js-ceramic/streams/event-log.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# Event Log
-
----
-
-The core data structure in the Ceramic protocol is a self-certifying event log. It combines IPLD for hash linked data and cryptographic proofs to create an authenticated and immutable log. This event log can be used to model mutable databases and other data structures on top.
-
-## Introduction
-
----
-
-Append-only logs are frequently used as an underlying immutable data structure in distributed systems to improve data integrity, consistency, performance, history, etc. Open distributed systems use hash linked lists/logs to allow others to verify the integrity of any data. IPLD provides a natural way to define an immutable append-only log.
-
-- **Web3 authentication** - When combined with cryptographic signatures and blockchain timestamping, it allows authenticated writes to these logs using blockchain accounts and DIDs
-- **Low cost decentralization** - Providing a common database layer for users and applications besides more expensive on-chain data or centralized and siloed databases
-- **Interoperability, flexibility, composability** - A minimally defined log structure allows a base level of interoperability while allowing diverse implementations of mutable databases and data structures on top. Base levels of interoperability include log transport, update syncing, consensus, etc.
-
-## Events
-
----
-
-Logs are made up of events. An init event originates a new log and is used to reference or name a log. The name of a stream is referred to as a [StreamId](uri-scheme.md#streamid). Every additional "update" is appended as a data event. Periodically, time events are added after one or more data events. Time events allow you to prove an event was published at or before some point in time using blockchain timestamping. They can also be used for ordering events within streams and for global ordering across streams and blockchain events. The minimal definition of a log is provided here; additional parameters in both the headers and body are defined at the application level or by higher-level protocols.
-
-Data events (and often Init Events) are signed DAG-JWS objects encoded in IPLD using the [DAG-JOSE codec](https://ipld.io/specs/codecs/dag-jose/spec/). Event payloads are typically encoded as DAG-CBOR, but could be encoded with any codec supported by a node or the network. Formats and types are described using the [IPLD schema language](https://ipld.io/docs/schemas/), and event encoding is further described below.
-
-### Init Event
-
-A log is initialized with an init event. The CID of this event is used to reference or name this log in a [StreamId](uri-scheme.md#streamid). An init event may be signed or unsigned.
-
-```bash
-type InitHeader struct {
- controllers [String]
-}
-type InitPayload struct {
- header InitHeader
- data optional Any
-}
-
-type InitJWS struct { // This is a DagJWS
- payload String
- signatures [Signature]
- link &InitPayload
-}
-
-type InitEvent InitPayload | InitJWS
-```
-
-Where:
-
-- **`controllers`** - an array of DID strings that defines which DIDs can write events to the log. When using a CACAO, the DID is expected to be the issuer of the CACAO. Note that currently only a single DID is supported.
-- **`data`** - arbitrary content; if defined, the init event must match the InitJWS struct (a DAG-JOSE envelope) and be encoded in DAG-JOSE, otherwise the InitPayload alone is a valid init event and is encoded in DAG-CBOR
-
-### Data Event
-
-Log updates are data events. Data events are appended in the log to an init event, a prior data event, or a time event. A data event MUST be signed.
-
-```bash
-type Event InitEvent | DataEvent | TimeEvent
-
-type DataHeader struct {
- controllers optional [String]
-}
-
-type DataEventPayload struct {
- id &InitEvent
- prev &Event
- header optional DataHeader
- data Any
-}
-
-type DataEvent struct { // This is a DagJWS
- payload String
- signatures [Signature]
- link &DataEventPayload
-}
-```
-
-Additional parameters are defined as follows; `controllers` and `data` are defined the same as above.
-
-- **`id`** - CID (Link) to the init event of the log
-- **`prev`** - CID (Link) to prior event in log
-- **`header`** - optional header, included here only if changing a header parameter value (controllers) from the prior event. Other header values may be included outside this specification.
-
-This being a minimally defined log on IPLD, later specifications or protocols can add additional parameters to both init and data events and their headers as needed.
-
-### Time Event
-
-Time events can be appended to an init event or to one or more data events. Reference the [CAIP-168 IPLD Timestamp Proof](https://chainagnostic.org/CAIPs/caip-168) specification for more details on creating and verifying time events. Time Events are a simple extension of the IPLD Timestamp Proof specification, where `prev` points to the prior event in the log and is expected to be the data that the timestamp proof is over. A time event is unsigned.
-
-```bash
-type TimeEvent struct {
- id &InitEvent
- prev &DataEvent | &InitEvent
- proof Link
- path String
-}
-```
-
-## Verification
-
----
-
-A valid log is one that includes events as defined above and that, when traversed, resolves to an originating init event. Each event is valid when it includes the required parameters above and its DAG-JWS signature is valid for the given event's `controller` DID, as defined below. Time events are defined as valid by CAIP-168. There will likely be additional verification steps specific to any protocol or application level definition.
-
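-As an illustration of the traversal step, the sketch below walks a log from a tip back to its init event; `loadEvent` is a hypothetical function that fetches and decodes the payload behind a CID (e.g. from an IPLD block store), and signature checks are elided:
-
-```tsx
-import type { CID } from 'multiformats/cid'
-
-// Minimal shape of a decoded event payload for traversal purposes.
-interface EventPayload {
-  id?: CID   // link to the init event (absent on the init event itself)
-  prev?: CID // link to the prior event (absent on the init event)
-}
-
-async function walkLog(
-  tip: CID,
-  loadEvent: (cid: CID) => Promise<EventPayload>
-): Promise<CID[]> {
-  const path: CID[] = []
-  let cursor: CID | undefined = tip
-  while (cursor !== undefined) {
-    path.push(cursor)
-    const event = await loadEvent(cursor)
-    cursor = event.prev // undefined once the init event is reached
-  }
-  return path // tip-first; the last entry is the init event
-}
-```
-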
-## Encoding
-
----
-
-### JWS & DAG-JOSE
-
-All signed events are encoded in IPLD using [DAG-JOSE](https://ipld.io/specs/codecs/dag-jose/spec/). DAG-JOSE is a codec and standard for encoding JOSE objects in IPLD. JOSE includes both [JWS](https://datatracker.ietf.org/doc/rfc7515/?include_text=1) for signed JSON objects and [JWE](https://datatracker.ietf.org/doc/rfc7516/?include_text=1) for encrypted JSON objects. JWS is used for events here and is a commonly used standard for signed data payloads. Some parameters are further defined for streams. The original DAG-JOSE specification can be found [here](https://ipld.io/specs/codecs/dag-jose/spec/).
-
-The following defines a general signed event, both init and data events are more specifically defined above.
-
-```bash
-type Signature struct {
- header optional { String : Any }
- // The base64url encoded protected header, contains:
- // `kid` - the DID URL used to sign the JWS
- // `cap` - IPFS url of the CACAO used (optional)
- protected optional String
- signature String
-}
-
-type EventJWS struct {
- payload String
- signatures [Signature]
- link &Event
-}
-```
-
-Where:
-
-- **`link`** - CID (Link) to the event that this signature is over. Provided for easy application access and IPLD traversal; expected to match the CID encoded in the payload
-- **`payload`** - base64url encoded CID link to the event (the JWS payload) that this signature is over
-- **`protected`** - base64 encoded JWS protected header
-- **`header`** - base64 encoded JWS header
-- **`signature`** - base64 encoded JWS signature
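-
-For a concrete sense of how such an envelope is produced, the sketch below uses the `dids` library with a Key DID; the seed handling is illustrative only:
-
-```tsx
-import { DID } from 'dids'
-import { Ed25519Provider } from 'key-did-provider-ed25519'
-import { getResolver } from 'key-did-resolver'
-
-// Sign an event payload, producing the JWS envelope and the encoded
-// payload block to store alongside it in IPLD.
-async function signEventPayload(seed: Uint8Array, payload: Record<string, any>) {
-  const did = new DID({ provider: new Ed25519Provider(seed), resolver: getResolver() })
-  await did.authenticate()
-  const { jws, linkedBlock } = await did.createDagJWS(payload)
-  return { jws, linkedBlock }
-}
-```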
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/streams/lifecycle.md b/docs/protocol/js-ceramic/streams/lifecycle.md
deleted file mode 100644
index eebad31d..00000000
--- a/docs/protocol/js-ceramic/streams/lifecycle.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Stream Lifecycle
-
-## Write Lifecycle
-
----
-
-### Create
-
-A stream is created when an [Init Event](event-log.md) is created and published. The stream is then uniquely referenced in the network by its [StreamId](uri-scheme.md), which is derived from this Init Event.
-
-### Update
-
-Updates to a stream include creating and publishing data events or timestamp events. When creating these events, they must reference the latest event, or tip, in the stream. The latest event, if there are multiple, is determined by locally following the conflict resolution and [consensus rules](consensus.md). The current update protocol is described further [here](../networking/tip-gossip.md).
-
-The data event is a signed event and is expected to be created and published by the controller of the stream to which it is being appended. A timestamp event, on the other hand, can be created by any participant in the network, given that it is a valid timestamp proof. Typically in the Ceramic network they will be created and published by a timestamping service.
-
-## Read Lifecycle
-
----
-
-### Query
-
-The network can be queried to discover the latest tips for any stream by StreamId. Knowing both the StreamId and tip then allows any node to sync the stream. Query requests are broadcast to the entire network to discover peers that have tips for any given stream. Future query protocols can be optimized and include other stream attributes and values to discover streams and stream tips. The current query protocol is described further [here](../networking/tip-queries.md).
-
-### Sync
-
-Streams can be synced and loaded by knowing both the StreamId and the latest event (tip). Given the latest tip, you can traverse the stream event log from event to event, in order, until the Init Event is reached. Each event is loaded from peers in the network; any peer with a tip is expected to have the entirety of the event stream log. The current sync protocol is described further [here](../networking/event-fetching.md).
-
-## Durability
-
----
-
-### Maintenance
-A stream is a set of [events](event-log.md) and these events are stored in IPFS nodes. As long as the entire set of events is pinned and advertised on the IPFS DHT, the respective stream will be retrievable. If your application depends on a stream remaining available, it is your application's responsibility to maintain and store all of its events. This can be done by running your own IPFS nodes or by using an IPFS pinning service. Typically you will be running an IPFS node with Ceramic.
-
-If any events are not available at a given time, it is not a guarantee that the stream has been deleted. A node with a copy of those events
-may be temporarily offline and may return at some future time.
-
-Other nodes in the network can pin (maintain and store) events from your streams or anyone else's streams. If you suffer a data loss, some other node MAY have preserved your data. Popular streams and their events are likely to be stored on many nodes in the network.
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/streams/streams-index.md b/docs/protocol/js-ceramic/streams/streams-index.md
deleted file mode 100644
index 1a6e1284..00000000
--- a/docs/protocol/js-ceramic/streams/streams-index.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Streams
-
-Data structures core to Ceramic
-
-### Overview
-
-Streams are a core concept in Ceramic. They include a primary data structure called an event log, a URI scheme to identify unique streams in a network, a simple consensus model to agree on the same event log across the network, and a supporting lifecycle of creating, updating, querying, and syncing streams.
-
-### [Event Log](event-log.md)
-
-The core data structure of streams is a self-certifying event log. It combines IPLD for hash linked data and cryptographic proofs to create an authenticated and immutable log. This event log can be used to model mutable databases and other data structures on top.
-
-### [URI Scheme](uri-scheme.md)
-
-A URI scheme is used to reference unique streams and unique events included in streams. They use a self-describing format that allows anyone to parse and consume a stream correctly, while also easily supporting future changes and new types.
-
-### [Consensus](consensus.md)
-
-An event log or stream can end up with multiple branches or tips across nodes in the network. Different branches will result in differing stream state. A simple consensus model is used to allow all nodes that consume the same set of events to eventually agree on a single log or state.
-
-### [Stream Lifecycle](lifecycle.md)
-
-A stream write lifecycle includes its creation and updates, otherwise known as events. A stream read lifecycle includes queries and syncing.
\ No newline at end of file
diff --git a/docs/protocol/js-ceramic/streams/uri-scheme.md b/docs/protocol/js-ceramic/streams/uri-scheme.md
deleted file mode 100644
index 3df47796..00000000
--- a/docs/protocol/js-ceramic/streams/uri-scheme.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# URI Scheme
-
----
-
-## Stream URL
-
----
-
-Each stream in Ceramic is identified by a unique URL. This URL is composed of a protocol identifier for Ceramic and a StreamId, as defined below.
-
-When encoded as a string, the StreamId is prepended with the protocol handler, and the StreamId is typically encoded using `base36`. This fully describes which stream it is and where it is located; in this case, it can be found on the Ceramic Network.
-
-
-```bash
-ceramic://<StreamId>
-```
-
-For example, a StreamId may look as follows:
-
-```bash
-ceramic://kjzl6fddub9hxf2q312a5qjt9ra3oyzb7lthsrtwhne0wu54iuvj852bw9wxfvs
-```
-
-EventIds can also be encoded in the same way.
-
-```bash
-ceramic://<EventId>
-```
-
-## StreamId
-
----
-
-A StreamId is composed of a StreamId code, a stream type, and a CID. It is used to reference a specific and unique event stream. StreamIds are similar to CIDs in IPLD, and use multiformats, but they provide additional information specific to Ceramic event streams. This also allows them to be distinguished from CIDs. The *init event* of an event stream is used to create the StreamId.
-
-StreamIds are defined as:
-
-```bash
-<streamid> ::= <multibase-prefix><multicodec-streamid><stream-type><init-cid-bytes>
-
-# e.g. using CIDv1
-<streamid> ::= <multibase-prefix><multicodec-streamid><stream-type><multicodec-cidv1><multicodec-content-type><multihash-content-address>
-```
-
-Where:
-
-- **`<multibase-prefix>`** is a [multibase](https://github.com/multiformats/multibase) code (1 or 2 bytes), to ease encoding StreamIds into various bases.
-:::note
- Binary (not text-based) protocols and formats may omit the multibase prefix when the encoding is unambiguous.
-:::
-- **`<multicodec-streamid>`** `0xce` is a [multicodec](https://github.com/multiformats/multicodec) used to indicate that it's a [StreamId](https://github.com/multiformats/multicodec/blob/master/table.csv#L78), encoded as a varint
-- **`<stream-type>`** is a [varint](https://github.com/multiformats/unsigned-varint) representing the stream type of the stream.
-- **`<init-cid-bytes>`** is the bytes from the [CID](https://github.com/multiformats/cid) of the `init event`, stripped of the multibase prefix.
-
-The multicodec for StreamID is [`0xce`](https://github.com/multiformats/multicodec/blob/master/table.csv#L78). For compatibility with browser URLs it's recommended to encode the StreamId using [`base36`](https://github.com/multiformats/multibase).
-
-The stream type value does not currently have any functionality at the protocol level. Rather, it is used by applications building on top of Ceramic (e.g. ComposeDB) to distinguish between different logic that is applied when processing events. Stream Type values have to be registered in the table of [CIP-59](https://github.com/ceramicnetwork/CIP/blob/main/CIPs/CIP-59/CIP-59.md#registered-values).
-
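-As a sketch of the layout above, the following assembles and base36-encodes a StreamId from an init event CID using `multiformats`; the varint helper is hand-rolled and the stream type value passed in is illustrative:
-
-```tsx
-import { CID } from 'multiformats/cid'
-import { base36 } from 'multiformats/bases/base36'
-
-// Unsigned varint encoding (https://github.com/multiformats/unsigned-varint).
-function varint(n: number): Uint8Array {
-  const out: number[] = []
-  while (n >= 0x80) {
-    out.push((n & 0x7f) | 0x80)
-    n >>>= 7
-  }
-  out.push(n)
-  return Uint8Array.from(out)
-}
-
-// <multicodec-streamid><stream-type><init-cid-bytes>, then base36
-// (which adds the multibase prefix, yielding the familiar "k..." form).
-function encodeStreamId(streamType: number, initEventCid: CID): string {
-  const codec = varint(0xce) // StreamId multicodec
-  const type = varint(streamType)
-  const bytes = new Uint8Array(codec.length + type.length + initEventCid.bytes.length)
-  bytes.set(codec, 0)
-  bytes.set(type, codec.length)
-  bytes.set(initEventCid.bytes, codec.length + type.length)
-  return base36.encode(bytes)
-}
-```
-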
-## EventId
-
----
-
-EventIds extend StreamIds to reference a specific event in a specific stream. Additional bytes are appended to the end of a StreamId. If the EventId represents the genesis event, the zero byte (`0x00`) is appended; otherwise, the CID that represents the event is appended.
-
-EventIds are defined as:
-
-```bash
-<eventid> ::= <streamid><event-cid-bytes>
-
-# e.g. using CIDv1 and representing the genesis event
-<eventid> ::= <multibase-prefix><multicodec-streamid><stream-type><init-cid-bytes><0x00>
-
-# e.g. using CIDv1 and representing an arbitrary event in the log
-<eventid> ::= <multibase-prefix><multicodec-streamid><stream-type><init-cid-bytes><event-cid-bytes>
-```
-
-Where:
-
-- **`<event-cid-bytes>`** is either the zero byte (`0x00`) or the [CID](https://github.com/multiformats/cid) bytes of the event.
-
-### Stream Versions
-
-Each EventId can also be considered a reference to a specific version of a stream. Given an EventId, a stream can be loaded up to and including that event, and the resulting set of events is considered that version of the stream.
\ No newline at end of file
diff --git a/docs/wheel/wheel-reference.mdx b/docs/wheel/wheel-reference.mdx
deleted file mode 100644
index f3c7c4cc..00000000
--- a/docs/wheel/wheel-reference.mdx
+++ /dev/null
@@ -1,152 +0,0 @@
-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'
-
-# Wheel reference
-
-This reference explains Wheel prompt options and covers Ceramic configurations in more detail.
-
-## Wheel prompt reference
-
-With Wheel, you can fully customize your working directory. Below you can find a prompt reference
-covering each step of the Wheel prompt.
-
-### Project Type
-
-Your project type based on the project development stage. You can choose one of
-the following options:
-
-- `InMemory` - recommended project type for developers who are new to Ceramic and ComposeDB.
- It’s the best option for projects in an early prototyping stage and for getting familiar with the Ceramic stack.
- This option runs all of the processes in-memory and doesn’t require you to configure Ceramic Anchor Service.
- This also means that the data generated for your project will not be anchored on a blockchain and will be lost
- once you close your terminal.
-- `Dev` - for projects in an early testing/development stage. This is the recommended option for projects in an early
- ideation or testing stage. Your node will connect to the dev-unstable network, a Ceramic network dedicated
- to testing. An important thing to remember about the dev-unstable network is that the data stored on it is wiped
- periodically as part of regular housekeeping. This means that the data streams generated for your project can be lost.
-- `Clay` - for projects in the active development stage. It’s the recommended option for projects that are past the ideation
- stage. Your node will connect to the clay-testnet network and anchor the data streams so that they are available for your
- project at any point of development. The Clay testnet, just like the dev-unstable network, gets wiped periodically for
- housekeeping reasons.
-- `Mainnet` - for projects in the production stage. This option will require you to do more advanced configuration of
- your working environment. Generally, this option is only recommended for generating a production configuration file to be
- used with a production deployment like [Kubernetes](../composedb/guides/composedb-server/running-in-the-cloud).
-
-### Project Name
-
-Set the name for your project. You can use the default option `ceramic-test-app` or type a custom one. This
-name will be used to create a local directory for your project.
-
-### Project Path
-
-Path to your project's local directory. You can use the default suggested path or specify a custom one.
-
-### Include Ceramic
-
-An option to install Ceramic CLI and Ceramic dependencies in your working environment.
-Defaults to `Y` - yes. To skip Ceramic installation, type `n`.
-
-### Include ComposeDB
-
-An option to install ComposeDB CLI and dependencies in your working environment.
-Defaults to `Y` - yes. To skip ComposeDB installation, type `n`.
-
-### Include ComposeDB Sample Application?
-
-An option to include and set up an example web3 social application built using ComposeDB
-on Ceramic. This application can be used as an easy way to test ComposeDB features or use
-this project as a basis for a new unique application. Defaults to `n` - no. To opt-in, type `Y`.
-
-### Admin DID Configuration
-
-Indexing is one of the key features of ComposeDB. In order to notify the Ceramic node which models have to be indexed, the
-ComposeDB tools have to interact with the restricted Admin API. Calling the API requires an authenticated Decentralized
-Identifier (DID) to be provided in the node configuration file. You can choose from the following options:
-
-- Generate DID and Private Key - generate a new admin DID as well as a private key (recommended for all new projects)
-- Input From File - you will be given an option to input an existing private key as well as a corresponding admin DID
-
-### File to save DID private key to?
-
-An option to store your DID private key in a specified local file. You can use the default path,
-specify a custom one, or press esc on your keyboard to skip this step if you don’t want to store the DID private key in a local file.
-
-### CAS URL
-
-`Dev`, `Clay` and `Mainnet` projects run a node that connects to CAS (Ceramic Anchor Service) to create anchor
-commits on the blockchain for the data streams generated for your project. You will be given an option to specify the CAS URL - you can
-use the default suggestion (recommended in most cases) or specify a custom URL if you run your own anchor service.
-
-### CAS Authentication
-
-In order to control the nodes connected to CAS (Ceramic Anchor Service), you will have to [configure the authentication](../composedb/guides/composedb-server/access-mainnet).
-This will allow you to set or revoke DIDs for your nodes. You can choose from the following options:
-
-- Email Based Authentication - an email authentication method. You will be asked to provide an email address that will receive an OTP code (a passcode) needed for authentication.
-- IP Based Authentication (Deprecated) - currently deprecated authentication method. Not recommended for new Ceramic users.
-
-### Wheel config file location
-
-Specifies the path to the Wheel configuration file. This file contains all parameters set during the Wheel configuration process. You can use the default suggestion
-or set a custom one.
-
-### Configure Ceramic
-
-When installing Ceramic you can either go with default configurations (recommended if you are new to Ceramic) or you can configure a bunch of parameters for how your node is set up.
-You can choose one of the following options:
-
-- Skip: Use default configuration based on network
-- Advanced: Configure all ceramic options
-
-Check out the Ceramic configuration section below to learn the details about the parameters you can configure with the Advanced option.
-
-### Would you like ceramic started as a daemon?
-
-An option to start your Ceramic daemon which will spin up the node using the Ceramic configuration you chose. Defaults to `Y` - yes. If you want to skip and run your node later, type `n`.
-
-## Ceramic configuration
-
-This section dives deeper into the Ceramic parameters you can configure when you choose `Advanced: Configure all ceramic options` option in your wheel prompt.
-
-### Bundled or Remote IPFS
-
-An option to define whether IPFS runs in the same compute process as Ceramic. You have two options to choose from:
-
-- Remote - IPFS running in a separate compute process; recommended for all Ceramic versions that use `ceramic-one`. This configuration requires an IPFS hostname. The default value is `http://localhost:5101`.
-- Bundled - IPFS running in the same compute process as Ceramic; used only with older Ceramic versions that use Kubo.
-
-
-### State Store
-
-An option to choose where your data will be persisted. To run a Ceramic node in production, it is critical to persist the Ceramic state store
- and the [IPFS datastore](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#datastorespec). You can choose from two options:
-
-- Local - the Ceramic state store will live on your machine's filesystem. This is a good option for early development and prototyping.
- If you choose this option, you will be asked to provide a path to your preferred state store directory, or you can go with the provided default.
-- S3 - a cloud state store. This is the recommended option for production use cases. It assumes that you already have an S3 bucket set up and can provide
- the path to your bucket running in the cloud.
-
-### Bind address
-
-Specifies the address the Ceramic daemon binds to. Defaults to `127.0.0.1`.
-
-### Bind port
-
-Specifies the port for the Ceramic daemon. Defaults to `7071`.
-
-### Cors origins
-
-An option to define which domains are allowed to access the node using the HTTP API. The default option allows access from all domains.
-
-### Run as gateway?
-
-An option to run the node in a read-only mode. A node running in this mode doesn’t support data mutations.
-
-### Indexing Database
-
-Indexing is one of the key features of ComposeDB on Ceramic. ComposeDB indexes data to improve the query performance. You can choose which database will be used to store indexed data:
-
-- Sqlite - a simple [sqlite](https://sqlite.org/index.html) database running on your local machine. This option is very lightweight, doesn’t require advanced configuration, and is
- recommended for projects in an early development stage. When choosing this option you will be asked to configure the sqlite database location - either use your current working directory or specify a custom one.
-- Postgres - a Postgres database running on your local machine. This option requires a bit more configuration and is required for production use cases. When you choose this option, you will be asked to provide
- the Postgres database connection string.
diff --git a/docusaurus.config.ts b/docusaurus.config.ts
index 7beefb23..c3431c7f 100644
--- a/docusaurus.config.ts
+++ b/docusaurus.config.ts
@@ -59,7 +59,7 @@ const config: Config = {
fromExtensions: ["html", "htm"],
redirects: [
{
- to: "/docs/composedb/guides/data-modeling",
+ to: "/docs/protocol/ceramic-one/",
from: "/docs/advanced/standards/data-models/"
},
{
@@ -122,19 +122,19 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/streams/consensus",
+ to: "/docs/protocol/ceramic-one/concepts",
from: ["/learn/advanced/consensus/", "/protocol/streams/consensus/"]
},
{
- to: "/docs/composedb/guides/data-modeling/model-catalog",
+ to: "/docs/protocol/ceramic-one/",
from: "/build/share/"
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/pinning",
+ to: "/docs/protocol/ceramic-one/",
from: ["/build/javascript/pinning/", "/build/pinning/"]
},
{
- to: "/docs/composedb/examples",
+ to: "/docs/protocol/ceramic-one/",
from: [
"/try/projects/",
"/explore/sample-apps/",
@@ -171,11 +171,11 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/streams/event-log",
+ to: "/docs/protocol/ceramic-one/concepts",
from: "/protocol/streams/event-log/"
},
{
- to: "/docs/composedb/getting-started",
+ to: "/docs/protocol/ceramic-one/",
from: [
"/build/",
"/tools/overview/",
@@ -189,7 +189,7 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/key-did",
+ to: "/docs/dids/introduction",
from: [
"/reference/accounts/key-did/",
"/docs/advanced/standards/accounts/key-did/",
@@ -202,19 +202,19 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/streams/uri-scheme",
+ to: "/docs/protocol/ceramic-one/concepts",
from: ["/protocol/streams/uri-scheme/", "/protocol/networking/streams/uri-scheme"]
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/clients-overview",
+ to: "/docs/protocol/ceramic-one/usage/installation",
from: ["/build/clients/", "/clients/javascript/cli/", "/learn/clients/", "/reference/javascript/clients/"]
},
{
- to: "/docs/composedb/guides",
+ to: "/docs/protocol/ceramic-one/",
from: "/guides"
},
{
- to: "/docs/protocol/js-ceramic/accounts/decentralized-identifiers#supported-methods",
+ to: "/docs/dids/introduction",
from: [
"/reference/accounts/3id-did/",
"/docs/advanced/standards/accounts/nft-did/",
@@ -226,7 +226,7 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/networking/networking-index",
+ to: "/docs/protocol/ceramic-one/concepts",
from: ["/protocol/networking/"]
},
{
@@ -234,11 +234,11 @@ const config: Config = {
from: ["/explore/explorers/", "/try/explorers/"]
},
{
- to: "/docs/protocol/js-ceramic/overview",
+ to: "/docs/protocol/ceramic-one/",
from: ["/run/cas/cas/", "/run/"]
},
{
- to: "/docs/composedb/guides/data-modeling#models",
+ to: "/docs/protocol/ceramic-one/concepts",
from: [
"/tools/glaze/datamodel/",
"/tools/glaze/did-datastore/",
@@ -248,7 +248,7 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/overview",
+ to: "/docs/protocol/ceramic-one/",
from: ["/reference/javascript/blockchain/", "/build/javascript/writes/", "/build/writes/"]
},
{
@@ -256,7 +256,7 @@ const config: Config = {
from: ["/reference/accounts/did-session/"]
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/stream-api/caip10-link",
+ to: "/docs/protocol/ceramic-one/concepts",
from: [
"/reference/stream-programs/caip10-link/",
"/streamtypes/caip-10-link/overview",
@@ -264,11 +264,11 @@ const config: Config = {
]
},
{
- to: "/docs/protocol/js-ceramic/networking/event-fetching",
+ to: "/docs/protocol/ceramic-one/concepts",
from: ["/protocol/networking/event-fetching/"]
},
{
- to: "/docs/protocol/js-ceramic/streams/lifecycle",
+ to: "/docs/protocol/ceramic-one/concepts",
from: ["/protocol/streams/lifecycle/"]
},
{
@@ -276,23 +276,23 @@ const config: Config = {
from: ["/learn/blog/"]
},
{
- to: "/docs/protocol/js-ceramic/streams/streams-index",
+ to: "/docs/protocol/ceramic-one/concepts",
from: ["/protocol/streams/", "/streamtypes/overview/"]
},
{
- to: "/docs/protocol/js-ceramic/accounts/authorizations",
+ to: "/docs/dids/authorization",
from: "/protocol/accounts/authorizations/"
},
{
- to: "/docs/protocol/js-ceramic/networking/tip-gossip",
+ to: "/docs/protocol/ceramic-one/concepts",
from: "/protocol/networking/tip-gossip/"
},
{
- to: "/docs/protocol/js-ceramic/networking/tip-queries",
+ to: "/docs/protocol/ceramic-one/concepts",
from: "/protocol/networking/tip-queries/"
},
{
- to: "/docs/protocol/js-ceramic/accounts/object-capabilities",
+ to: "/docs/dids/authorization",
from: "/protocol/accounts/object-capabilities/"
},
{
@@ -300,7 +300,7 @@ const config: Config = {
from: "/docs/introduction/next-steps/"
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/http-api",
+ to: "/docs/protocol/ceramic-one/usage/installation",
from: [
"/build/cli/api/",
"/reference/http-api/",
@@ -315,7 +315,7 @@ const config: Config = {
from: "/reference/typescript/variables/_ceramicnetwork_core.INDEXED_MODEL_CONFIG_TABLE_NAME.html"
},
{
- to: "/docs/protocol/js-ceramic/accounts/accounts-index",
+ to: "/docs/dids/introduction",
from: "/protocol/accounts/"
},
{
@@ -323,31 +323,31 @@ const config: Config = {
from: "/reference/typescript/interfaces/_ceramicnetwork_common.AnchorProof.html"
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/authentication/did-jsonrpc",
+ to: "/docs/dids/introduction",
from: "/reference/core-clients/did-jsonrpc/"
},
{
- to: "/docs/protocol/js-ceramic/nodes/overview",
+ to: "/docs/protocol/ceramic-one/",
from: ["/protocol/nodes", "/run/nodes/node-providers/", "/run/nodes/community-nodes/"]
},
{
- to: "/docs/protocol/js-ceramic/nodes/running-a-node",
+ to: "/docs/protocol/ceramic-one/usage/installation",
from: ["/run/nodes/nodes", "/run/nodes", "/run/nodes/available/"]
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/queries",
+ to: "/docs/protocol/ceramic-one/usage/query",
from: "/build/queries"
},
{
- to: "/docs/protocol/js-ceramic/guides/ceramic-clients/javascript-clients/ceramic-http",
+ to: "/docs/protocol/ceramic-one/usage/installation",
from: "/build/javascript/http"
},
{
- to: "/docs/composedb/set-up-your-environment",
+ to: "/docs/protocol/ceramic-one/usage/installation",
from: ["/build/installation/", "/build/javascript/installation/", "/build/installation/Ceramic"]
},
{
- to: "/docs/composedb/interact-with-data#authentication",
+ to: "/docs/dids/authorization",
from: ["/build/authentication/"]
},
{
@@ -355,7 +355,7 @@ const config: Config = {
from: ["/build/the-ceramic-stack/", "/learn/overview/"]
},
{
- to: "/docs/protocol/js-ceramic/networking/networks",
+ to: "/docs/protocol/ceramic-one/",
from: ["/learn/networks/", "/learn/mainnet/"]
},
{
@@ -367,7 +367,7 @@ const config: Config = {
from: ["/learn/features/"]
},
{
- to: "/docs/protocol/js-ceramic/accounts/decentralized-identifiers#pkh-did",
+ to: "/docs/dids/introduction",
from: ["/docs/advanced/standards/accounts/pkh-did/"]
},
{
@@ -387,12 +387,27 @@ const config: Config = {
from: ["/reference/typescript/interfaces/_ceramicnetwork_common.pinapi-1.html"]
},
{
- to: "/docs/protocol/js-ceramic/nodes/overview",
- from: ["/docs/protocol/js-ceramic/nodes"]
+ to: "/docs/protocol/ceramic-one/",
+ from: ["/docs/protocol/js-ceramic/nodes", "/docs/protocol/js-ceramic/overview"]
},
{
to: "/docs/introduction/protocol-overview",
from: "/protocol"
+ },
+ {
+ to: "/docs/protocol/ceramic-one/",
+ from: [
+ "/docs/composedb/getting-started",
+ "/docs/composedb/create-ceramic-app",
+ "/docs/composedb/set-up-your-environment",
+ "/docs/composedb/create-your-composite",
+ "/docs/composedb/interact-with-data",
+ "/docs/composedb/core-concepts",
+ "/docs/composedb/next-steps",
+ "/docs/composedb/examples",
+ "/docs/composedb/guides",
+ "/docs/wheel/wheel-reference"
+ ]
}
]
}
@@ -417,36 +432,12 @@ const config: Config = {
label: "Introduction"
},
{
- label: "Developer Tools",
-
- items: [
- {
- to: "docs/composedb/getting-started",
- label: "ComposeDB"
- },
- {
- to: "docs/wheel/wheel-reference",
- label: "Wheel"
- },
- {
- to: "docs/dids/introduction",
- label: "Decentralized Identifiers"
- }
- ]
+ to: "docs/protocol/ceramic-one/",
+ label: "Ceramic One"
},
{
- label: "Protocol",
-
- items: [
- {
- to: "docs/protocol/js-ceramic/overview",
- label: "JS-Ceramic"
- },
- {
- to: "docs/protocol/ceramic-one/",
- label: "Ceramic One"
- }
- ]
+ to: "docs/dids/introduction",
+ label: "Decentralized Identifiers"
},
{
label: "Ecosystem",
@@ -488,12 +479,12 @@ const config: Config = {
to: "/docs/introduction/intro"
},
{
- label: "ComposeDB",
- to: "/docs/composedb/getting-started"
+ label: "Ceramic One",
+ to: "/docs/protocol/ceramic-one/"
},
{
- label: "Protocol",
- to: "/docs/protocol/js-ceramic/overview"
+ label: "Decentralized Identifiers",
+ to: "/docs/dids/introduction"
}
]
},
diff --git a/sidebars.ts b/sidebars.ts
index 95d13e1d..7acb468a 100644
--- a/sidebars.ts
+++ b/sidebars.ts
@@ -25,393 +25,43 @@ const sidebars: SidebarsConfig = {
},
items: [
{ type: "doc", id: "introduction/protocol-overview", label: "Ceramic Protocol" },
- { type: "doc", id: "introduction/composedb-overview", label: "ComposeDB" },
{ type: "doc", id: "introduction/did-overview", label: "Decentralized Identifiers" }
]
},
- { type: "doc", id: "introduction/technical-reqs", label: "Technical Requirements" },
- { type: "doc", id: "introduction/ceramic-roadmap", label: "Roadmap" }
+ { type: "doc", id: "introduction/technical-reqs", label: "Technical Requirements" }
],
- protocol: [
+ ceramicOne: [
{
type: "doc",
- id: "protocol/js-ceramic/overview",
- label: "Overview"
- },
- {
- type: "category",
- collapsed: true,
- label: "Guides",
- link: {
- type: "doc",
- id: "protocol/js-ceramic/guides/guides-index"
- },
- items: [
- {
- type: "category",
- collapsed: true,
- label: "Ceramic Nodes",
- items: [
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-nodes/running-locally",
- label: "Running Locally"
- },
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-nodes/running-cloud",
- label: "Running in the Cloud"
- }
- ]
- },
- {
- type: "category",
- collapsed: false,
- label: "Ceramic Clients",
- link: {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/clients-overview"
- },
- items: [
- {
- type: "category",
- collapsed: true,
- label: "JavaScript Client",
- items: [
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/javascript-clients/ceramic-http",
- label: "Basic Usage"
- },
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/javascript-clients/http-api",
- label: "Ceramic HTTP API"
- },
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/javascript-clients/queries",
- label: "Queries"
- }
- ]
- },
- {
- type: "category",
- collapsed: true,
- label: "Authentication",
- items: [
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/authentication/did-jsonrpc",
- label: "Basic Usage"
- },
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/authentication/key-did",
- label: "Key DID"
- },
- {
- type: "doc",
- id: "protocol/js-ceramic/guides/ceramic-clients/authentication/did-session",
- label: "DID Session"
- }
- ]
- }
- ]
- }
- ]
- },
- {
- type: "category",
- collapsed: false,
- label: "Streams",
- link: {
- type: "doc",
- id: "protocol/js-ceramic/streams/streams-index"
- },
- items: [
- { type: "doc", id: "protocol/js-ceramic/streams/event-log", label: "Event Log" },
- { type: "doc", id: "protocol/js-ceramic/streams/uri-scheme", label: "URI Scheme" },
- { type: "doc", id: "protocol/js-ceramic/streams/consensus", label: "Consensus" },
- { type: "doc", id: "protocol/js-ceramic/streams/lifecycle", label: "Lifecycle" }
- ]
- },
- {
- type: "category",
- collapsed: false,
- label: "Accounts",
- link: {
- type: "doc",
- id: "protocol/js-ceramic/accounts/accounts-index"
- },
- items: [
- {
- type: "doc",
- id: "protocol/js-ceramic/accounts/decentralized-identifiers",
- label: "Decentralized IDs"
- },
- { type: "doc", id: "protocol/js-ceramic/accounts/authorizations", label: "Authorizations" },
- {
- type: "doc",
- id: "protocol/js-ceramic/accounts/object-capabilities",
- label: "Object-Capabilities"
- }
- ]
- },
- {
- type: "category",
- collapsed: false,
- label: "Networking",
- link: {
- type: "doc",
- id: "protocol/js-ceramic/networking/networking-index"
- },
- items: [
- { type: "doc", id: "protocol/js-ceramic/networking/networks", label: "Networks" },
- {type: "doc", id: "protocol/js-ceramic/networking/data-feed-api", label: "Data Feed API" },
- ]
- },
- {
- type: "category",
- collapsed: false,
- label: "Nodes",
- link: {
- type: "doc",
- id: "protocol/js-ceramic/nodes/overview"
- },
- items: [
- { type: "doc", id: "protocol/js-ceramic/nodes/overview", label: "Overview" },
- { type: "doc", id: "protocol/js-ceramic/nodes/running-a-node", label: "Running a Node" }
- ]
+ id: "protocol/ceramic-one/README",
+ label: "Getting Started"
},
{
- type: "link",
- label: "API Reference",
- href: "https://developers.ceramic.network/reference/typescript/modules.html"
- }
- ],
- composedb: [
- {
- type: "category",
- collapsed: false,
- label: "Getting Started",
- link: {
- type: "doc",
- id: "composedb/getting-started"
- },
- items: [
- {
- type: "doc",
- id: "composedb/create-ceramic-app",
- label: "Scaffold a new Ceramic app"
- },
- {
- type: "doc",
- id: "composedb/set-up-your-environment",
- label: "Quickstart"
- },
- { type: "doc", id: "composedb/create-your-composite", label: "Create your composite" },
- { type: "doc", id: "composedb/interact-with-data", label: "Interact with data" },
- { type: "doc", id: "composedb/core-concepts", label: "Core ComposeDB concepts" },
- { type: "doc", id: "composedb/next-steps", label: "Next Steps" }
- ]
+ type: "doc",
+ id: "protocol/ceramic-one/concepts",
+ label: "Concepts"
},
{
type: "category",
collapsed: false,
- label: "Tutorials and Examples",
- link: {
- type: "doc",
- id: "composedb/examples/index"
- },
+ label: "Usage",
items: [
- {
- type: "doc",
- id: "composedb/examples/tutorials-and-examples",
- label: "Starter Apps and Tutorials"
- },
- {
- type: "doc",
- id: "composedb/examples/verifiable-credentials",
- label: "Verifiable Credentials"
- },
- {
- type: "doc",
- id: "composedb/examples/taco-access-control",
- label: "TACo with ComposeDB"
- }
+ "protocol/ceramic-one/usage/installation",
+ "protocol/ceramic-one/usage/produce",
+ "protocol/ceramic-one/usage/consume",
+ "protocol/ceramic-one/usage/query"
]
},
{
type: "category",
collapsed: false,
- label: "Guides",
- link: {
- type: "doc",
- id: "composedb/guides/index"
- },
+ label: "Self-Anchoring",
items: [
- {
- type: "category",
- collapsed: true,
- label: "Data Modeling",
- link: {
- type: "doc",
- id: "composedb/guides/data-modeling/data-modeling"
- },
- items: [
- {
- type: "doc",
- id: "composedb/guides/data-modeling/model-catalog",
- label: "Model Catalog"
- },
- {
- type: "category",
- collapsed: true,
- label: "Writing Models",
- link: {
- type: "doc",
- id: "composedb/guides/data-modeling/writing-models"
- },
- items: [
- {
- type: "doc",
- id: "composedb/guides/data-modeling/introduction-to-modeling",
- label: "Introduction to Modeling"
- },
- {
- type: "doc",
- id: "composedb/guides/data-modeling/schemas",
- label: "Schemas"
- },
- {
- type: "doc",
- id: "composedb/guides/data-modeling/relations",
- label: "Relations"
- }
- ]
- },
- {
- type: "doc",
- id: "composedb/guides/data-modeling/composites",
- label: "Composites"
- }
- ]
- },
- {
- type: "category",
- collapsed: true,
- label: "ComposeDB Client",
- link: {
- type: "doc",
- id: "composedb/guides/composedb-client/composedb-client"
- },
- items: [
- {
- type: "category",
- collapsed: true,
- label: "JavaScript Client",
- link: {
- type: "doc",
- id: "composedb/guides/composedb-client/javascript-client"
- },
- items: [
- {
- type: "doc",
- id: "composedb/guides/composedb-client/using-apollo",
- label: "Using Apollo"
- },
- {
- type: "doc",
- id: "composedb/guides/composedb-client/using-relay",
- label: "Using Relay"
- }
- ]
- },
- {
- type: "category",
- collapsed: true,
- label: "Authenticate Users",
- link: {
- type: "doc",
- id: "composedb/guides/composedb-client/authenticate-users"
- },
- items: [
- {
- type: "doc",
- id: "composedb/guides/composedb-client/user-sessions",
- label: "User Sessions"
- }
- ]
- }
- ]
- },
- {
- type: "category",
- collapsed: true,
- label: "ComposeDB Server",
- link: {
- type: "doc",
- id: "composedb/guides/composedb-server/composedb-server"
- },
- items: [
- {
- type: "doc",
- id: "composedb/guides/composedb-server/running-locally",
- label: "Running Locally"
- },
- {
- type: "doc",
- id: "composedb/guides/composedb-server/running-in-the-cloud",
- label: "Running in the Cloud"
- },
- {
- type: "doc",
- id: "composedb/guides/composedb-server/server-configurations",
- label: "Server Configurations"
- },
- {
- type: "doc",
- id: "composedb/guides/composedb-server/access-mainnet",
- label: "Access Mainnet"
- },
- {
- type: "doc",
- id: "composedb/guides/composedb-server/data-storage",
- label: "Data Storage"
- }
- ]
- },
- {
- type: "category",
- collapsed: true,
- label: "Data Interactions",
- link: {
- type: "doc",
- id: "composedb/guides/data-interactions/data-interactions"
- },
- items: [
- {
- type: "doc",
- id: "composedb/guides/data-interactions/queries",
- label: "Queries"
- },
- {
- type: "doc",
- id: "composedb/guides/data-interactions/mutations",
- label: "Mutations"
- }
- ]
- }
+ "protocol/ceramic-one/anchoring/overview",
+ "protocol/ceramic-one/anchoring/evm-configuration"
]
- },
- {
- type: "link",
- label: "ComposeDB API",
- href: "https://composedb.js.org/docs/0.6.x/category/public-apis"
}
],
- wheel: [{ type: "doc", id: "wheel/wheel-reference", label: "Wheel Reference" }],
dids: [
{ type: "doc", id: "dids/introduction", label: "Introduction" },
{
@@ -426,7 +76,6 @@ const sidebars: SidebarsConfig = {
label: "Guides",
items: [
"dids/guides/concepts-overview",
- "dids/guides/using-with-composedb-client",
"dids/guides/add-chain-support",
"dids/guides/upgrading-did-session"
]
@@ -439,43 +88,7 @@ const sidebars: SidebarsConfig = {
id: "ecosystem/community",
label: "Overview"
}
- ],
- ceramicOne: [
- {
- type: "doc",
- id: "protocol/ceramic-one/README",
- label: "Ceramic One",
- },
- {
- type: "doc",
- id: "protocol/ceramic-one/concepts",
- label: "Concepts",
- },
- {
- type: "category",
- collapsed: true,
- label: "Usage",
- items: [
- "protocol/ceramic-one/usage/installation",
- "protocol/ceramic-one/usage/produce",
- "protocol/ceramic-one/usage/consume",
- "protocol/ceramic-one/usage/query",
- ],
- }
- ],
-
- // But you can create a sidebar manually
- /*
- tutorialSidebar: [
- 'intro',
- 'hello',
- {
- type: 'category',
- label: 'Tutorial',
- items: ['tutorial-basics/create-a-document'],
- },
- ],
- */
+ ]
};
export default sidebars;
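Taken together, the sidebar hunks above retire the standalone `ceramicOne` sidebar and fold its pages into the protocol tree. A minimal sketch of the resulting shape, assuming the standard typed `sidebars.ts` setup; the enclosing sidebar key and the first category's label are not visible in the hunks, so `protocol` and "Usage" below are assumptions:

```ts
// Sketch only: reconstructed from the hunks above, not the full file.
import type { SidebarsConfig } from "@docusaurus/plugin-content-docs";

const sidebars: SidebarsConfig = {
  // "protocol" is a guess: the enclosing key is not shown in the hunks.
  protocol: [
    {
      type: "category",
      collapsed: false,
      label: "Usage", // assumption: this category's label is not shown
      items: [
        "protocol/ceramic-one/usage/installation",
        "protocol/ceramic-one/usage/produce",
        "protocol/ceramic-one/usage/consume",
        "protocol/ceramic-one/usage/query"
      ]
    },
    {
      type: "category",
      collapsed: false,
      label: "Self-Anchoring",
      items: [
        "protocol/ceramic-one/anchoring/overview",
        "protocol/ceramic-one/anchoring/evm-configuration"
      ]
    }
  ],
  dids: [{ type: "doc", id: "dids/introduction", label: "Introduction" }]
};

export default sidebars;
```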
diff --git a/src/components/homepage/start-building.js b/src/components/homepage/start-building.js
index 99bca8ca..5ece3858 100644
--- a/src/components/homepage/start-building.js
+++ b/src/components/homepage/start-building.js
@@ -4,18 +4,18 @@ import styles from "./homeNavBoxes.module.css";
const FeatureList = [
{
- title: "Example App →",
+ title: "Getting Started →",
items: [
{
- url: "/docs/composedb/create-ceramic-app",
- text: "Setup a fully functioning Ceramic app by running one simple command."
+ url: "/docs/protocol/ceramic-one/",
+ text: "Learn how to install and run Ceramic One to build decentralized applications."
}
]
},
{
- title: "ComposeDB →",
+ title: "Ceramic SDK →",
items: [
- { url: "docs/composedb/getting-started", text: "Build composable dApps using a decentralised graph database." }
+ { url: "/docs/protocol/ceramic-one/usage/installation", text: "Use the Ceramic SDK to produce and consume events on the Ceramic network." }
]
}
];
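For context, `FeatureList` entries like the ones edited above are plain data that a presentational component maps into link cards. A hedged sketch of such a renderer: only the data shape and the `homeNavBoxes.module.css` import appear in the diff, so the component name, class names, and markup below are assumptions.

```tsx
// Hypothetical renderer for the FeatureList data shown above.
// Grounded: the FeatureList shape and the CSS-module import.
// Assumed: component name, class names, and markup.
import React from "react";
import Link from "@docusaurus/Link";
import styles from "./homeNavBoxes.module.css";

const FeatureList = [
  {
    title: "Getting Started →",
    items: [
      {
        url: "/docs/protocol/ceramic-one/",
        text: "Learn how to install and run Ceramic One to build decentralized applications."
      }
    ]
  }
];

export default function StartBuilding() {
  return (
    <div className={styles.container}>
      {FeatureList.map(({ title, items }) => (
        <div key={title} className={styles.box}>
          <h3>{title}</h3>
          {items.map(({ url, text }) => (
            <Link key={url} to={url}>
              {text}
            </Link>
          ))}
        </div>
      ))}
    </div>
  );
}
```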
diff --git a/src/components/homepage/tools-utilities.js b/src/components/homepage/tools-utilities.js
index f07cf191..d6a790d4 100644
--- a/src/components/homepage/tools-utilities.js
+++ b/src/components/homepage/tools-utilities.js
@@ -4,28 +4,28 @@ import styles from "./homeNavBoxes.module.css";
const FeatureList = [
{
- title: "Ceramic Protocol →",
+ title: "Ceramic One →",
items: [
{
- url: "docs/protocol/js-ceramic/overview",
- text: "Dive into the specifications and implementation of the Ceramic Protocol."
+ url: "/docs/protocol/ceramic-one/",
+ text: "Dive into the Rust implementation of the Ceramic protocol."
}
]
},
{
- title: "Data Feed API →",
- items: [{ url: "docs/protocol/js-ceramic/networking/data-feed-api", text: "Build custom indexes on Ceramic." }]
+ title: "Query Pipeline →",
+ items: [{ url: "/docs/protocol/ceramic-one/usage/query", text: "Query Ceramic data using Flight SQL." }]
},
{
title: "Decentralized Identifiers (DIDs) →",
- items: [{ url: "docs/dids/introduction", text: "Interact and manage decentralized identifiers." }]
+ items: [{ url: "/docs/dids/introduction", text: "Interact and manage decentralized identifiers." }]
},
{
- title: "Simple Deploy →",
+ title: "Self-Anchoring →",
items: [
{
- url: "docs/composedb/guides/composedb-server/running-in-the-cloud",
- text: "Easily run Ceramic Nodes in the Cloud."
+ url: "/docs/protocol/ceramic-one/anchoring/overview",
+ text: "Run your own anchor service on any EVM blockchain."
}
]
}
diff --git a/src/pages/index.js b/src/pages/index.js
index c6f5a606..a787c630 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -40,9 +40,9 @@ export default function Home() {
background: "linear-gradient(215deg, var(--ifm-color-primary) -33%, var(--box-color) 50%)"
}}
>
-          Build with ComposeDB
+          Build with Ceramic One
- A decentralized, composable graph database to build interoperable applications on Ceramic.
+ The next-generation Ceramic node in Rust. Build scalable, decentralized applications with verifiable, composable data.
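The final hunk lost its surrounding JSX during extraction: only the heading and subtitle strings survived, so the `-`/`+` pairing above is reconstructed. For orientation, a plausible shape of the updated hero block, assuming a heading-plus-paragraph layout; the element names and structure are guesses, and only the two strings and the gradient style are grounded in the diff.

```tsx
{/* Hypothetical reconstruction of the hero edited in src/pages/index.js.
    Grounded: the gradient style and the two strings. Assumed: the markup. */}
<div
  style={{
    background:
      "linear-gradient(215deg, var(--ifm-color-primary) -33%, var(--box-color) 50%)"
  }}
>
  <h1>Build with Ceramic One</h1>
  <p>
    The next-generation Ceramic node in Rust. Build scalable, decentralized
    applications with verifiable, composable data.
  </p>
</div>
```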