Shelfforce is designed for machine-to-machine integration. Whether you are building an AI agent with Claude, GPT, or an open-source model, Shelfforce provides everything needed for automatic tool discovery, reliable execution, and event-driven workflows.

Discovery

AI agents need to understand what tools are available. Shelfforce provides two discovery mechanisms:

OpenAPI specification

The full OpenAPI 3.1 specification is available at:
https://docs.shelfforce.ai/openapi.json
Use this spec for automatic tool generation in frameworks like LangChain, CrewAI, AutoGen, or the OpenAI/Anthropic function calling APIs. The spec includes all endpoints, request/response schemas, authentication requirements, and example values.
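If you are wiring up tool generation yourself rather than using a framework, the mapping from an OpenAPI operation to a function-calling tool definition is mechanical. The sketch below is illustrative, not a Shelfforce SDK helper: it reads the standard OpenAPI 3.1 fields (operationId, summary, requestBody) from one operation object and emits the tool shape used later on this page.

```typescript
// Illustrative sketch: convert one OpenAPI 3.1 operation into a
// function-calling tool definition. Frameworks like LangChain do this
// for you; this shows what they produce under the hood.
interface ToolDefinition {
  name: string;
  description: string;
  input_schema: Record<string, unknown>;
}

function operationToTool(operation: {
  operationId: string;
  summary?: string;
  requestBody?: {
    content?: Record<string, { schema?: Record<string, unknown> }>;
  };
}): ToolDefinition {
  // Fall back to an empty object schema when the operation takes no body.
  const schema =
    operation.requestBody?.content?.["application/json"]?.schema ??
    { type: "object" };
  return {
    name: operation.operationId,
    description: operation.summary ?? operation.operationId,
    input_schema: schema,
  };
}
```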

llms.txt

For LLM context injection, Shelfforce publishes an llms.txt file:
https://shelfforce.ai/llms.txt
This file describes Shelfforce capabilities, endpoints, and usage patterns in natural language — optimized for LLM comprehension. Include it in your agent’s system prompt or retrieval context to help the model understand how to use the API. A more detailed version is available at:
https://shelfforce.ai/llms-full.txt
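One simple way to use this file: fetch it at agent startup and append it to your system prompt, so the context stays current without hard-coding API details. The prompt wording and helper below are illustrative assumptions, not part of the Shelfforce API.

```typescript
// Sketch: compose a system prompt from agent instructions plus llms.txt.
// The delimiter line is arbitrary; it just separates instructions from
// the injected reference material.
function buildSystemPrompt(llmsTxt: string, instructions: string): string {
  return [
    instructions,
    "",
    "=== Shelfforce API reference (llms.txt) ===",
    llmsTxt.trim(),
  ].join("\n");
}

// At startup (network call, shown for context):
// const llmsTxt = await (await fetch("https://shelfforce.ai/llms.txt")).text();
// const systemPrompt = buildSystemPrompt(llmsTxt, "You are a retail shelf auditing agent.");
```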
The standard workflow for an AI agent integrating with Shelfforce:

1. Analyze: Submit a shelf image for analysis.
   POST /api/v1/analyses
   { "imageUrl": "https://...", "metadata": { "source": "agent" } }

2. Poll or listen: Either poll GET /api/v1/analyses/{id} until status is completed, or configure a webhook for analysis.completed.

3. Extract: Read the product data from the completed analysis response. Parse brand names, facing counts, prices, and promotional status.

4. Report: Query share-of-shelf and store performance reports for aggregated insights.
   GET /api/v1/reports/share-of-shelf?from=2026-02-01&to=2026-02-23

5. Act: Based on the results, the agent can create tasks, flag compliance issues, or feed data into downstream systems.
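Step 2 is where most agent code lives, so here is a sketch of the polling loop with the HTTP call injected as a parameter (which also makes the loop testable without a network). The `Analysis` shape and `getAnalysis` wiring are assumptions; connect `getAnalysis` to GET /api/v1/analyses/{id} in a real agent.

```typescript
// Assumed minimal shape of an analysis resource.
interface Analysis {
  id: string;
  status: "pending" | "processing" | "completed" | "failed";
  products?: unknown[];
}

// Poll until the analysis completes, fails, or the deadline passes.
// Defaults follow the guidance on this page: poll every 3 seconds,
// and analyses typically finish in 10-30 seconds.
async function pollAnalysis(
  getAnalysis: (id: string) => Promise<Analysis>,
  id: string,
  { intervalMs = 3000, timeoutMs = 120_000 } = {}
): Promise<Analysis> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const analysis = await getAnalysis(id);
    if (analysis.status === "completed") return analysis;
    if (analysis.status === "failed") throw new Error(`Analysis ${id} failed`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out waiting for analysis ${id}`);
}
```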

Function calling schema

Here is an example tool definition for Claude or GPT function calling that covers the core analysis workflow:
{
  "name": "analyze_shelf",
  "description": "Submit a retail shelf image for analysis. Returns detected products with brand, name, facings, price, and share-of-shelf data. Costs 1 credit per image. Results take 10-30 seconds.",
  "input_schema": {
    "type": "object",
    "properties": {
      "imageUrl": {
        "type": "string",
        "description": "Publicly accessible URL of a retail shelf image (JPEG, PNG, or WebP)."
      },
      "metadata": {
        "type": "object",
        "description": "Optional key-value pairs to attach to the analysis (e.g., store name, aisle).",
        "additionalProperties": { "type": "string" }
      }
    },
    "required": ["imageUrl"]
  }
}
Additional tool definitions your agent may need:
[
  {
    "name": "get_analysis",
    "description": "Get the status and results of a shelf analysis by ID. Returns products array when status is 'completed'.",
    "input_schema": {
      "type": "object",
      "properties": {
        "analysisId": {
          "type": "string",
          "description": "The analysis ID returned from analyze_shelf."
        }
      },
      "required": ["analysisId"]
    }
  },
  {
    "name": "get_share_of_shelf",
    "description": "Get aggregated share-of-shelf report for a date range. Returns brand-level facing percentages.",
    "input_schema": {
      "type": "object",
      "properties": {
        "from": {
          "type": "string",
          "description": "Start date in ISO 8601 format."
        },
        "to": {
          "type": "string",
          "description": "End date in ISO 8601 format."
        }
      },
      "required": ["from", "to"]
    }
  },
  {
    "name": "create_task",
    "description": "Create a field task for a store visit. Assign to a place with optional due date and instructions.",
    "input_schema": {
      "type": "object",
      "properties": {
        "title": { "type": "string", "description": "Task title." },
        "placeId": { "type": "string", "description": "Store/place ID." },
        "dueDate": { "type": "string", "description": "ISO 8601 due date." },
        "instructions": { "type": "string", "description": "Instructions for the field rep." }
      },
      "required": ["title", "placeId"]
    }
  }
]

Idempotency keys

Idempotency is critical for AI agents, which may retry requests due to timeouts, network errors, or reasoning loops. Always include an idempotency key with analysis requests:
curl -X POST https://shelfforce.ai/api/v1/analyses \
  -H "Authorization: Bearer sf_live_a1b2c3d4..." \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: agent-run-2026-02-23-store42-aisle3" \
  -d '{
    "imageUrl": "https://example.com/shelf.jpg"
  }'
If the same idempotency key is sent again within 24 hours, Shelfforce returns the existing analysis rather than creating a duplicate. This prevents wasted credits and duplicate data.
Without idempotency keys, agent retries will create duplicate analyses and consume additional credits. Always include them in automated pipelines.
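One way to build such a key (an assumption about your pipeline, not a Shelfforce requirement): hash the inputs that define "the same request", so retries within a run reuse the key while a new run gets a fresh one.

```typescript
import { createHash } from "node:crypto";

// Derive a stable idempotency key from a run identifier and the image URL.
// Hashing keeps the key short and avoids leaking URLs into headers/logs.
function idempotencyKey(imageUrl: string, runId: string): string {
  const digest = createHash("sha256")
    .update(`${runId}:${imageUrl}`)
    .digest("hex");
  return `agent-${digest.slice(0, 32)}`;
}
```

Whatever scheme you choose, the key must stay constant across retries of the same logical request; anything that changes per attempt (a timestamp, a random UUID generated inside the retry loop) defeats the deduplication.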

Webhook-driven vs. polling

There are two patterns for handling asynchronous analysis results:

Polling (simpler, good for agents)

Agent → POST /analyses → get ID → loop GET /analyses/{id} until completed → process results
Polling is straightforward and works well for agents that maintain a single execution thread. Poll every 3-5 seconds. Analyses typically complete in 10-30 seconds.

Webhook-driven (more efficient, better for production)

Agent → POST /analyses → return to other work → webhook fires → agent resumes with results
Webhooks are more efficient for production systems that process many images. Register a webhook for analysis.completed, and your system receives the results as soon as they are ready — no wasted requests.
For agents that need to wait for results inline (e.g., in a tool call), polling is simpler. For background pipelines or multi-step workflows, webhooks are recommended.
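On the receiving side, a webhook handler mostly routes by event type. The payload shape below (`type`, `data.id`) is an assumption for illustration; check it against the actual analysis.completed payload in the webhooks documentation before relying on it.

```typescript
// Assumed minimal webhook payload shape.
interface WebhookEvent {
  type: string;
  data: { id: string };
}

// Dispatch an incoming event; return whether it was acted on.
// Unknown event types should still be acknowledged with a 2xx response
// so the sender does not redeliver them.
function handleWebhookEvent(
  event: WebhookEvent,
  onCompleted: (analysisId: string) => void
): "handled" | "ignored" {
  if (event.type === "analysis.completed") {
    onCompleted(event.data.id);
    return "handled";
  }
  return "ignored";
}
```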

Error handling for agents

AI agents should handle Shelfforce errors gracefully:
AUTH_REQUIRED / INVALID_API_KEY: Stop and surface the error. Do not retry; the key needs to be fixed.
INSUFFICIENT_CREDITS: Stop and notify the user that credits need to be purchased.
VALIDATION_FAILED: Fix the request parameters. Check details for the specific field that failed.
RATE_LIMITED: Wait for Retry-After seconds, then retry. Implement exponential backoff.
INTERNAL_ERROR: Retry up to 3 times with exponential backoff. If it persists, surface the error.
NOT_FOUND: The resource ID is wrong. Do not retry; verify the ID.

Example: Agent error handling loop

// `apiKey` is assumed to be loaded from the environment; `Analysis` matches
// the analysis response shape.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function analyzeWithRetry(imageUrl: string, maxRetries = 3): Promise<Analysis> {
  // Compute the key once, outside the retry loop, so every attempt reuses it.
  // A key that changes per attempt (e.g. one built from Date.now()) would
  // defeat idempotency and create duplicate analyses.
  const idempotencyKey = `agent-${imageUrl}`;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch("https://shelfforce.ai/api/v1/analyses", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
        "Idempotency-Key": idempotencyKey,
      },
      body: JSON.stringify({ imageUrl }),
    });

    if (response.ok) {
      return response.json();
    }

    const { error } = await response.json();

    // Non-retryable errors: fail fast and surface the message
    if (["AUTH_REQUIRED", "INVALID_API_KEY", "INSUFFICIENT_CREDITS",
         "FORBIDDEN", "VALIDATION_FAILED", "NOT_FOUND"].includes(error.code)) {
      throw new Error(`Non-retryable error: ${error.code}: ${error.message}`);
    }

    // Rate limited: wait for the server-specified duration
    if (error.code === "RATE_LIMITED") {
      const retryAfter = parseInt(
        response.headers.get("Retry-After") || "10", 10
      );
      await sleep(retryAfter * 1000);
      continue;
    }

    // Server error: exponential backoff
    if (attempt < maxRetries) {
      await sleep(1000 * Math.pow(2, attempt));
      continue;
    }
  }

  throw new Error("Max retries exceeded");
}

MCP server (coming soon)

A Model Context Protocol (MCP) server for Shelfforce is on the roadmap. This will allow Claude Desktop, Cursor, and other MCP-compatible clients to discover and use Shelfforce tools automatically.

Best practices for agents

Always use idempotency keys

Prevents duplicate analyses and wasted credits when agents retry requests.

Include metadata

Tag analyses with source, agent name, and context so you can trace which agent submitted which analysis.

Handle all error codes

Distinguish between retryable (429, 500) and non-retryable (401, 402, 403, 422) errors.

Cache analysis results

Analysis results are permanent. Store results locally to avoid redundant GET requests.

Use llms.txt for context

Include the llms.txt content in your agent’s system prompt for better API understanding.

Respect rate limits

Implement exponential backoff. Use batch endpoints to reduce request count.
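A common way to implement the backoff (an illustrative helper, not part of a Shelfforce SDK) is exponential growth with full jitter: the delay ceiling doubles per attempt up to a cap, and randomization spreads retries from many agents apart so they do not hammer the API in lockstep.

```typescript
// Exponential backoff with full jitter: pick a random delay between 0 and
// min(cap, base * 2^attempt).
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}
```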