Starts an AI analysis in the background and returns immediately with thread metadata. Because agent execution continues in the background while thread information is returned right away, this endpoint supports polling-based UI updates.
Supports model configuration via the llm_config parameter (defaults to "deep" if not provided).
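As a sketch, building a request for this endpoint might look like the following. The path `/v1/chat/async`, the `message` field, the `X-User-Id` header name, and the `reasoning_effort` key inside `llm_config` are assumptions for illustration; only `llm_config`, its `"deep"` default, and Bearer authentication come from this page.

```python
import json

# Hypothetical endpoint path -- replace with the real async chat route.
URL = "https://heyhumm.ai/v1/chat/async"

def build_request(token, message, effort="deep", user_id=None):
    """Assemble headers and a JSON body for the async chat endpoint.

    Per the docs, llm_config defaults to "deep" when not provided;
    the body field names here are placeholder assumptions.
    """
    headers = {
        "Authorization": f"Bearer {token}",  # Bearer auth token
        "Content-Type": "application/json",
    }
    if user_id is not None:
        # Optional user-ID override for system operations
        # (the header name is a placeholder assumption).
        headers["X-User-Id"] = user_id
    body = {
        "message": message,
        "llm_config": {"reasoning_effort": effort},
    }
    return headers, json.dumps(body)

headers, body = build_request("example-token", "Summarize my inbox")
```

The returned headers and body would then be passed to any HTTP client; the response carries the thread metadata used for subsequent polling.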
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Override user ID for system operations (optional)
Request model for async chat endpoint.
Configuration for model selection and reasoning effort.
Allowed values: monolith, sandbox, deepagents; typed, suggestion.

Successful Response
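The start-then-poll pattern the endpoint description implies can be sketched as a generic loop; the status route, terminal status names, and the `fetch_status` callback are assumptions for illustration, not part of this API's documented surface.

```python
import time

def poll_until_done(fetch_status, interval=1.0, timeout=30.0):
    """Call fetch_status() repeatedly until it reports a terminal
    state or the timeout elapses.

    fetch_status would wrap a GET on a hypothetical thread-status
    route using the thread metadata returned at start time; the
    "completed"/"failed" status values are placeholder assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("agent run did not finish in time")

# Usage with a stubbed status source that finishes on the third poll.
states = iter(["queued", "running", "completed"])
result = poll_until_done(lambda: next(states), interval=0.01)
# result == "completed"
```

A UI would typically run this loop off the main thread and re-render whenever the reported status changes.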