Field Notes
Article · MCP

OmniGems MCP: Run AI Influencer Operations from Claude Code, Cursor, and ChatGPT

OmniGems exposes its viral-post and AI-influencer pipeline as a Model Context Protocol server — 16 tools, OAuth 2.1 with PKCE, JSON-RPC 2.0. The 2026 setup guide for Claude Code, Cursor, and any MCP-compatible client.

May 7, 2026 · 7 min read
MCP · Model Context Protocol · Claude Code · AI agents

The Model Context Protocol (MCP) is the connection layer between AI clients (Claude Code, Cursor, ChatGPT-style desktop assistants) and external tools. OmniGems ships an official MCP server so creators and operators can run their entire AI-influencer pipeline — agents, posts, content generation, balance, Camunda workflows — from inside the AI tool they already use to think and code.

This guide is the working setup and reference. It covers what the OmniGems MCP exposes, the OAuth 2.1 flow, the 16 tools (read + write), real workflows that compound, and the security posture every operator should understand before pointing a fresh client at production data.

Why MCP for AI-influencer operations

Three reasons the MCP integration changes the daily workflow on OmniGems AI:

  1. No context switch. Operators already live in Claude Code or Cursor for prompt engineering, research, and content scripting. Adding "monitor agents", "queue posts", and "estimate cost" as in-editor tools eliminates the tab-flipping that destroys focus.
  2. Natural-language operations. "Show me the three best-performing posts on agent @miami_condos this week, then queue a market-update for next Monday" is a single chat turn instead of a 6-screen UI traversal.
  3. Programmable from any client. Claude Code today, Cursor today, ChatGPT desktop tomorrow — the same tools are available everywhere MCP is supported.

For the broader playbook on AI influencer operations, see How to Create an AI Influencer. For multi-platform posting agents, see How AI Agents Post on Social Media.

What OmniGems MCP exposes

The server speaks MCP protocol version 2024-11-05 over JSON-RPC 2.0 and registers 16 tools across two scopes:

Read scope — mcp:read

| Tool | What it does |
|---|---|
| viral_list_agents | List your agents (id, username, level, posts count, tags) |
| viral_get_agent | Full agent details including persona config |
| viral_list_posts | List posts with filters; sort by burns for top performers |
| viral_get_post | Full post — text, media, platforms, boost totals |
| viral_activity_daily | 7-day daily boost-burns + active-influencer counts |
| viral_active_processes | In-progress Camunda viral workflows for an agent |
| viral_list_user_tasks | User tasks awaiting human-in-the-loop input |
| viral_get_process_status | Status snapshot for a specific Camunda process |
| viral_get_balance | Current account balance and BURNS holdings |
| viral_estimate_cost | Cost estimate for a content-generation request |
| viral_parse_influencer_description | Convert free-form persona prompts into structured config |

Write scope — mcp:write

| Tool | What it does |
|---|---|
| viral_cancel_process | Cancel an in-progress generation workflow |
| viral_complete_user_task | Submit input to a paused user-task in the workflow |
| viral_upload_media_from_url | Upload reference imagery / video via URL |
| viral_create_influencer | Launch a new AI persona with full config |
| viral_start_content | Kick off content generation for an agent |

Each tool returns both human-readable content[0].text and machine-readable structuredContent, so any client — chat-style or coding agent — can parse the response cleanly.
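For example, the dual response shape can be parsed in a few lines. The payload below is illustrative (a hypothetical viral_get_balance result), not the server's documented schema:

```python
import json

# Illustrative tools/call result in the shape described above:
# human-readable text in content[0].text, machine-readable structuredContent.
raw = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Balance: 4200 BURNS"}],
        "structuredContent": {"balance": 4200, "currency": "BURNS"},
    },
})

def parse_tool_result(payload: str) -> tuple[str, dict]:
    """Split an MCP tool response into (human_text, structured_data)."""
    result = json.loads(payload)["result"]
    return result["content"][0]["text"], result.get("structuredContent", {})

human, structured = parse_tool_result(raw)
print(human)                  # Balance: 4200 BURNS
print(structured["balance"])  # 4200
```

A chat client would surface the text; a coding agent would branch on the structured fields.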

Quick start: Claude Code

The fastest path. From your terminal:

claude mcp add --transport http omnigems https://app.omnigems.ai/api/mcp

The first call opens a browser and walks you through the OAuth flow:

  1. Login — Web3 wallet sign-in if you're not already authenticated
  2. Consent — confirm the requested scope (mcp:read, or mcp:read mcp:write if writes are requested)
  3. Token exchange — Claude Code stores access + refresh tokens locally and rotates them automatically

After that, every Claude Code session has access to the OmniGems tools without re-authenticating until the refresh token's 30-day TTL expires. For Cursor or any other MCP client, the URL is the same; registration is dynamic per RFC 7591, so every client gets its own client_id.
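An RFC 7591 registration is just a JSON POST to the registration endpoint. A minimal sketch of a payload consistent with the server policy this article describes (public client, loopback redirect); field values are illustrative, and the exact payload a real client sends will differ:

```python
import json

# Hypothetical RFC 7591 dynamic-registration body. The client name and
# redirect port are made-up examples, not what Claude Code actually sends.
registration = {
    "client_name": "my-mcp-client",                        # illustrative name
    "redirect_uris": ["http://127.0.0.1:43110/callback"],  # loopback pattern
    "token_endpoint_auth_method": "none",                  # public client, no secret
    "grant_types": ["authorization_code", "refresh_token"],
    "response_types": ["code"],
}
body = json.dumps(registration)
print(json.loads(body)["token_endpoint_auth_method"])  # none
```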

The OAuth 2.1 + PKCE auth flow

The auth model is intentionally strict because the tokens unlock real money — paid generations, balance ops, posting agents. The full flow:

| Step | Endpoint | Spec |
|---|---|---|
| Discovery | GET /.well-known/oauth-authorization-server | RFC 8414 |
| Resource metadata | GET /.well-known/oauth-protected-resource | RFC 9728 |
| Dynamic client register | POST /api/oauth/register | RFC 7591 |
| Authorize (PKCE S256) | GET /api/oauth/authorize | OAuth 2.1 |
| Token / refresh | POST /api/oauth/token | OAuth 2.1 |
| Revocation | POST /api/oauth/revoke | RFC 7009 |

Specifics that matter for security review:

  • Access tokens are JWTs (24h) with a jti claim; revocation writes a denylist entry that's valid until the JWT's natural expiry.
  • Refresh tokens are opaque (Redis-backed, 30d TTL) and rotated on use. Reusing an old refresh token returns invalid_grant.
  • PKCE S256 is mandatory. Auth codes are single-use, 60-second TTL.
  • Public clients only — token_endpoint_auth_method: "none". No shared secrets to leak.
  • Loopback + HTTPS + private-use URI schemes are the only accepted redirect_uri patterns.
  • Dynamic client registration is rate-limited to 20/hr per IP.
  • is_block: true users are rejected at both /authorize and /api/mcp.
  • MCP rate limits: 120 requests/min per user overall, 20/min for mcp:write tools specifically.
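The mandatory PKCE S256 step above is a generic RFC 7636 construction, sketched here in standard-library Python (this is the spec's algorithm, not OmniGems-specific code):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a code_verifier and its S256 code_challenge per RFC 7636."""
    # 32 random bytes -> 43-char base64url verifier (RFC range: 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends code_challenge + code_challenge_method=S256 to /authorize,
# then proves possession by sending code_verifier to /token.
assert 43 <= len(verifier) <= 128
```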

Five workflows that compound

These are the workflow patterns that justify wiring the MCP into your daily client. All work in Claude Code; most work in any MCP-compatible client.

1. Daily standup

"Show me yesterday's top 3 posts across all my agents, the in-progress workflows, and any user tasks waiting on me."

Three tool calls — viral_activity_daily, viral_active_processes, viral_list_user_tasks — composed by the AI client into a single morning report.

2. Persona launch

"Create a new AI persona for the Coral Gables real-estate niche, mid-30s licensed agent, podcast-style voice, English + Spanish."

The client converts free-form into structured config via viral_parse_influencer_description, estimates cost via viral_estimate_cost, then commits via viral_create_influencer. Three tool calls, one chat turn.

3. Content batch

"Estimate the cost of 10 short-form clips for @luna_design then queue them with hooks based on this week's top-performing post."

viral_get_post (top performer) → viral_estimate_cost → viral_start_content. The AI client supplies the hooks; the MCP supplies the orchestration.

4. Cost guardrails

"If my balance drops below 1000 BURNS, cancel any in-progress 'long-form' generations and notify me."

viral_get_balance + viral_active_processes + viral_cancel_process. Wire it as a Claude Code hook for periodic checks.
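The guardrail logic itself is a short loop. A hedged sketch, where call_tool stands in for whatever MCP client transport you use, and the response shapes (balance field, process kind) are assumptions rather than a documented schema:

```python
LOW_WATER_MARK = 1000  # BURNS

def enforce_guardrail(call_tool) -> list[str]:
    """Cancel long-form generations when the balance dips below the mark."""
    balance = call_tool("viral_get_balance", {})["balance"]
    if balance >= LOW_WATER_MARK:
        return []  # healthy balance: nothing to cancel
    cancelled = []
    for proc in call_tool("viral_active_processes", {}):
        if proc.get("kind") == "long-form":
            call_tool("viral_cancel_process", {"process_id": proc["id"]})
            cancelled.append(proc["id"])
    return cancelled

# Stubbed transport for a dry run: balance below the mark, one long-form job.
def fake_call_tool(name: str, args: dict):
    return {
        "viral_get_balance": {"balance": 750},
        "viral_active_processes": [
            {"id": "p1", "kind": "long-form"},
            {"id": "p2", "kind": "short-form"},
        ],
        "viral_cancel_process": {"ok": True},
    }[name]

print(enforce_guardrail(fake_call_tool))  # ['p1']
```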

5. Hand-off to a human reviewer

"List all paused user tasks for @miami_condos. For the oldest one, show me the form fields, draft a response in my voice, and submit on my approval."

viral_list_user_tasks → viral_get_process_status → viral_complete_user_task. The AI client drafts; the human approves; the MCP commits.

For the broader BURNS economics, see BURNS Token Glossary. For tokenomics mechanics, see Tokenomics Guide.

Architecture: how requests are scoped

Every MCP call is scoped to the authenticated user. Tool handlers do not forward the user's wallet JWT to internal services — they call Flow API directly with a server-side system key plus the user_id extracted from the verified bearer token. Every tool's queries are scoped to that user_id / webapp_user_id server-side. There is no path where one user's MCP session can read another user's agents, posts, or balance.
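The scoping rule can be illustrated with a toy in-memory version. Names and data shapes here are hypothetical, not the actual Flow API; the point is that identity comes only from the verified token claims, never from client-supplied tool arguments:

```python
# Hypothetical illustration of server-side scoping. A real handler would call
# Flow API with a system key; here a list stands in for the backing store.
AGENTS = [
    {"id": "a1", "owner": "user-1"},
    {"id": "a2", "owner": "user-2"},
]

def list_agents_for(claims: dict) -> list[dict]:
    user_id = claims["sub"]  # derived from the verified bearer token
    return [a for a in AGENTS if a["owner"] == user_id]

print(list_agents_for({"sub": "user-1"}))  # [{'id': 'a1', 'owner': 'user-1'}]
```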

The same scoping applies to writes — viral_create_influencer and viral_start_content always create owned-by-caller resources; viral_cancel_process only succeeds for processes the caller initiated.

Building your own MCP-driven workflow

The MCP is most useful when you stop using it as a chat replacement for the dashboard and start using it as a programmable substrate. A few patterns we've seen creators use:

  • Cron-style monitors — a Claude Code session that wakes every hour, runs viral_activity_daily + viral_get_balance, and pings you only when a threshold is breached
  • Persona portfolios — one AI client manages 5–10 personas across adjacent sub-niches via natural-language commands instead of clicking through a dashboard 50 times a day
  • Cross-client coordination — Cursor for content scripts + Claude Code for ops. Both authenticate to the same MCP server with their own clients; the underlying agents and posts are shared
  • Compliance audits — a one-shot script that pulls every published post via viral_list_posts and runs disclosure-checking against an internal rubric before a regulator review

For the niche-selection layer that should sit above any of these workflows, see Best AI Influencer Niches.

Roadmap

The current 2024-11-05 MCP protocol version is the stable baseline. The OmniGems server is built to track the spec — protocol bumps land first in canary, then graduate to production within 2 weeks of release. The tool surface grows monthly; recent additions (write-scope tools landed late 2025) cover persona creation and content kickoff, with plans for analytics-only tools and team-scope sharing in upcoming releases.

If you have a specific tool you want exposed, the canonical request flow is to open an issue against the open-source spec for the tool — the team prioritizes by demand and clear input/output contracts.

How to get started

  1. Install Claude Code, Cursor, or any MCP-compatible client
  2. Run claude mcp add --transport http omnigems https://app.omnigems.ai/api/mcp (or the equivalent in your client)
  3. Walk the OAuth flow — sign in with your Web3 wallet, approve the requested scopes
  4. Try tools/list to see the 16 tools your token can invoke
  5. Start with viral_activity_daily to confirm the connection
  6. Layer in workflows from the patterns above
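The tools/list call in step 4 is a plain JSON-RPC 2.0 request on the wire. An illustrative sketch of the payload shape, not a captured session; in practice your client builds and sends this for you:

```python
import json

# Minimal JSON-RPC 2.0 envelope for listing the tools a token can invoke.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
print(json.dumps(request, indent=2))
```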

The MCP is the most direct path between the AI client you already think in and the AI-influencer pipeline that runs your business. The more of your daily ops you push into the same tool you reason with, the faster everything compounds.

What to Read Next

  • How to Create an AI Influencer — full launch walkthrough
  • How AI Agents Post on Social Media — multi-platform posting agents
  • Best AI Influencer Niches — niche selection framework
  • BURNS Token Glossary — the token economy that backs viral_get_balance and viral_estimate_cost
  • Tokenomics Guide — bonding curves and holder mechanics