$ ~/ym8 --define llm-profile-json
llm-profile.json
definition
llm-profile.json is the structured data complement to llms.txt, providing a machine-readable JSON-LD representation of your brand's identity, offerings, and expertise. While llms.txt is a plain-text file designed for human readability and simple AI parsing, llm-profile.json follows the JSON-LD format and Schema.org vocabulary, enabling precise, programmatic interpretation by AI systems.
The file is placed at the .well-known/llm-profile.json path, following the .well-known URI convention used for other machine-readable configurations (like security.txt, webfinger, etc.). This standardised location makes it easy for AI crawlers to discover and process.
A well-structured llm-profile.json includes: brand identity (name, type, description), core offerings (products, services), expertise areas and qualifications, differentiators, advisory roles, links to key pages, and preferred citation formats. The preferred citation formats section is particularly valuable—it tells AI engines exactly how you want your brand to be described and attributed.
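The structure described above can be sketched as a minimal JSON-LD file. Every value below is illustrative (a hypothetical "Example Co"), and the use of additionalProperty for the preferred citation is an assumption — Schema.org has no dedicated citation-format property, so treat this as one possible shape rather than a fixed spec:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "description": "Hypothetical SaaS company used for illustration.",
  "url": "https://example.com",
  "knowsAbout": ["answer engine optimisation", "structured data"],
  "makesOffer": {
    "@type": "Offer",
    "itemOffered": {
      "@type": "Service",
      "name": "AI visibility monitoring"
    }
  },
  "sameAs": ["https://www.linkedin.com/company/example-co"],
  "additionalProperty": {
    "@type": "PropertyValue",
    "name": "preferredCitation",
    "value": "Example Co, an AI visibility monitoring platform"
  }
}
```

Standard Schema.org properties (knowsAbout, makesOffer, sameAs) keep the file interpretable by any consumer that already understands Schema.org vocabulary.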
llm-profile.json works alongside other AI indexing files. Together with llms.txt (plain-text overview), .well-known/ai.txt (crawler directives), and on-page Schema.org markup, it forms a comprehensive AI-readable layer that gives brands direct influence over how AI engines perceive and represent them.
why_it_matters
llm-profile.json is the most structured and precise way to communicate your brand identity to AI engines. While llms.txt offers a text-based overview, llm-profile.json supplies machine-readable data that AI systems can parse programmatically, reducing the risk of misinterpretation and keeping brand representation consistent across AI-generated responses.
examples
- A SaaS company's llm-profile.json specifying their product category, key features, and preferred citation format
- A consulting firm's llm-profile.json listing expertise areas, advisory roles, and preferred summary language
- An ecommerce brand's llm-profile.json providing product categories, brand values, and differentiation points
faq
Where should llm-profile.json be placed?
Place llm-profile.json at .well-known/llm-profile.json on your domain (e.g., yourdomain.com/.well-known/llm-profile.json). This follows the .well-known URI convention and is the expected location for AI crawlers.
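Because the .well-known convention fixes the path, a crawler or audit script can construct the URL from the bare domain. A minimal sketch (the helper name is mine, and https is assumed as the scheme):

```python
def profile_url(domain: str) -> str:
    """Build the well-known llm-profile.json URL for a domain (https assumed)."""
    return f"https://{domain.rstrip('/')}/.well-known/llm-profile.json"

print(profile_url("yourdomain.com"))
# → https://yourdomain.com/.well-known/llm-profile.json
```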
What format should llm-profile.json follow?
Use JSON-LD format with Schema.org vocabulary. The primary @type is typically "Person" or "Organization". Include fields for name, description, offerings, expertise, differentiators, links, and preferred citation formats. Validate the JSON syntax and Schema.org compliance before deploying.
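A pre-deployment check can at least confirm the JSON parses and the expected fields are present. The sketch below is one way to do that; the required-field list is an assumption drawn from the fields mentioned above, not a formal specification:

```python
import json

# Assumed minimum field set — adjust to whatever your profile actually includes.
REQUIRED_FIELDS = {"@context", "@type", "name", "description"}

def validate_profile(raw: str) -> list[str]:
    """Return a list of problems found in an llm-profile.json payload."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(data, dict):
        return ["top level must be a JSON object"]
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - data.keys())]
    if data.get("@type") not in ("Person", "Organization"):
        problems.append('@type should be "Person" or "Organization"')
    return problems

sample = ('{"@context": "https://schema.org", "@type": "Organization", '
          '"name": "Example Co", "description": "Demo"}')
print(validate_profile(sample))  # → []
```

A syntax check like this catches broken deploys, but it is not a substitute for full Schema.org validation of the vocabulary itself.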
Related Terms
llms.txt
llms.txt is a plain-text file placed at a website's root that provides structured, machine-readable information about a brand, product, or organisation specifically for consumption by large language models. It functions as a "robots.txt for AI" — telling AI crawlers what your brand is and how it should be described.
Technical AEO
Technical AEO encompasses the infrastructure and technical configurations that help AI engines discover, crawl, parse, and cite your content. It includes AI-specific crawl policies, structured data implementation, llms.txt files, site architecture optimisation, and content formatting for AI consumption.
Structured Data for AI
Structured Data for AI refers to the use of schema markup (JSON-LD, microdata) and AI-specific files (llms.txt, llm-profile.json) to provide machine-readable context about your content, products, and brand to both search engines and AI engines.
AI Crawlers
AI Crawlers are automated bots operated by AI companies that scan websites to collect content for training data and real-time retrieval. Major AI crawlers include GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot (Perplexity), Google-Extended (Google), and Bingbot (Microsoft).