Documentation Index
Fetch the complete documentation index at: https://heyhumm.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.

If you already use the Humm API, the simplest way to manage business context programmatically is to work directly with the existing Memories, Ontology, and Commands endpoints. This is the recommended path if you want to:
- Sync context from your own systems
- Let an AI assistant help draft updates
- Build repeatable internal workflows without waiting on new product features
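As an illustration, an authenticated request against one of these endpoints might be built like this. The base URL, the `memories` resource path, and the bearer-token scheme are assumptions for the sketch; check the Humm API reference for the real routes and auth requirements.

```python
import urllib.request


def build_get(base_url: str, resource: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for a Humm resource.

    Resource names such as "memories", "ontology", or "commands" are
    illustrative -- consult the API reference for the actual paths.
    """
    req = urllib.request.Request(f"{base_url}/{resource}")
    req.add_header("Authorization", f"Bearer {token}")
    return req
```

The same helper works for any of the three resource families; sending the request is a matter of passing it to `urllib.request.urlopen` once the URL and token are real.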
API vs. MCP
Use the API when you want to change business context. Use the Humm MCP Server when you want to query Humm from an AI assistant.
Use the API
Read, diff, approve, and apply updates to memories, ontology, and commands.
Use MCP
Explore data sources, search context, and run Humm queries from Claude, ChatGPT, or another MCP client.
Recommended Workflow
For all context changes, follow the same pattern:
- Fetch the current resource
- Draft the proposed change locally
- Show a diff or summary for human review
- Apply the write only after approval
- Re-fetch the resource and verify the result
Humm recommends a preview-and-approve step before any write, even if you automate the rest of the workflow.
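The five steps above can be sketched as a single function. Everything here is illustrative rather than a Humm SDK: `fetch`, `draft`, `approve`, and `apply` are caller-supplied callables, and the JSON diff is just one convenient way to present the change for review.

```python
import difflib
import json


def preview_and_apply(fetch, draft, approve, apply):
    """Fetch -> draft -> show diff -> approve -> apply -> re-fetch and verify."""
    current = fetch()
    proposed = draft(current)
    # Render a human-readable diff of the proposed change.
    diff = "\n".join(difflib.unified_diff(
        json.dumps(current, indent=2, sort_keys=True).splitlines(),
        json.dumps(proposed, indent=2, sort_keys=True).splitlines(),
        fromfile="current", tofile="proposed", lineterm="",
    ))
    if not approve(diff):  # human review gate: nothing is written without it
        return current
    apply(proposed)
    result = fetch()  # re-fetch and verify the write landed
    if result != proposed:
        raise RuntimeError("post-apply verification failed")
    return result
```

With an in-memory store standing in for the API, `preview_and_apply` runs the full loop: the draft is diffed, shown to the `approve` callback, written only on approval, and checked against a fresh fetch.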
Guides
Using an AI Assistant
You do not need a Humm-specific plugin for these workflows. Give your assistant:
- Your API base URLs
- A bearer token or PAT
- The workflow you want it to follow
- A requirement to preview changes before applying them
If your assistant runs shell commands for you, make sure your token is available
to non-interactive shells. For example, export
HUMM_PAT from ~/.zshenv for
zsh or ~/.bash_profile for bash, rather than only from an interactive shell
profile.
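If your scripts read the token in Python, a small guard like this makes a missing or shell-invisible token fail with a useful message. The variable name HUMM_PAT comes from the text above; the helper itself is only a sketch.

```python
import os


def require_token(name: str = "HUMM_PAT") -> str:
    """Return the token from the environment, or fail with setup guidance."""
    token = os.environ.get(name)
    if token is None:
        raise RuntimeError(
            f"{name} is not set in this shell; export it from ~/.zshenv (zsh) "
            "or ~/.bash_profile (bash) so non-interactive shells can see it."
        )
    return token
```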