Website Resources

Website resources allow you to save a URL and configure how its content is delivered when accessed by AI models. You can choose to provide just the URL, scrape the website for markdown content, or generate AI summaries - either live on each read or cached at creation time.

Creating a Website Resource

  1. Navigate to Resources in the left sidebar
  2. Click + New Resource
  3. Select Website as the resource type
  4. Fill in the resource details:
    • Name: A descriptive name for the resource (e.g., company_docs)
    • Description: Brief description of what this website contains
    • Website URL: The full URL of the website (e.g., https://docs.example.com)
    • Read Behavior: Choose how content is delivered (see below)
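
The details above map naturally onto a small configuration record. A minimal sketch in Python, with illustrative names only (none of this is the platform's actual API):

from dataclasses import dataclass
from enum import Enum

class ReadBehavior(Enum):
    URL_ONLY = "url_only"
    LIVE_MARKDOWN = "live_markdown"
    CACHED_MARKDOWN = "cached_markdown"
    LIVE_SUMMARY = "live_summary"
    CACHED_SUMMARY = "cached_summary"

@dataclass
class WebsiteResource:
    name: str                    # e.g. "company_docs"
    description: str             # what the website contains
    url: str                     # full URL, e.g. "https://docs.example.com"
    read_behavior: ReadBehavior  # how content is delivered on each read

docs = WebsiteResource(
    name="company_docs",
    description="Public documentation for our product",
    url="https://docs.example.com",
    read_behavior=ReadBehavior.CACHED_MARKDOWN,
)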

Processing Status

When you create a website resource with Cached Markdown or Cached Summary, the resource is created immediately while content scraping happens in the background. You'll see:

  • Processing: A loading indicator while the website is being scraped
  • Ready: Content is available and displayed on the details page
  • Error: If scraping fails, the resource is automatically downgraded to URL Only mode

This allows you to continue working while large websites are being processed. The page will automatically update when processing completes.
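
Because scraping runs in the background, anything that consumes a freshly created cached resource should wait for the Ready state. A hypothetical polling helper in Python (how you obtain the status is up to the caller; this page does not describe a status API):

import time
from typing import Callable

def wait_until_ready(fetch_status: Callable[[], str], timeout_s: float = 120.0) -> str:
    """Poll until background scraping reaches a terminal state.

    fetch_status is a caller-supplied function returning "processing",
    "ready", or "error".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "ready":
            return "ready"    # content is available on the details page
        if status == "error":
            return "error"    # the resource was downgraded to URL Only
        time.sleep(2)         # still scraping in the background
    raise TimeoutError(f"still processing after {timeout_s}s")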

Read Behavior Options

URL Only

Returns just the URL as the resource content. No scraping or processing occurs.

  • Cost: Free
  • Use case: When you just need to reference a URL

Live Markdown

Scrapes the website and converts it to markdown each time the resource is read.

  • Cost: 1 credit per read (using platform scraping)
  • Use case: Websites that update frequently where you need current content

Cached Markdown

Scrapes the website once at creation and serves that cached markdown on each read.

  • Cost: 1 credit at creation (using platform scraping)
  • Use case: Static documentation or content that doesn't change often

Live Summary

Scrapes the website and generates an AI summary each time the resource is read.

  • Cost: 1 credit + LLM tokens per read (using platform scraping and LLM)
  • Use case: When you need current, concise summaries of frequently updated content

Cached Summary

Generates an AI summary once at creation and serves that cached summary on each read.

  • Cost: 1 credit + LLM tokens at creation (using platform scraping and LLM)
  • Use case: When you need a concise summary that doesn't need frequent updates
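
The five behaviors differ only in when scraping and summarization happen. A rough Python sketch of the dispatch, purely illustrative (scrape and summarize stand in for the platform's scraping and LLM steps):

from typing import Callable, Optional

def resolve_content(
    behavior: str,                    # "url_only", "live_markdown", "cached_markdown", ...
    url: str,
    cached: Optional[str],            # markdown or summary stored at creation time
    scrape: Callable[[str], str],     # stand-in for platform scraping
    summarize: Callable[[str], str],  # stand-in for the configured LLM
) -> str:
    """Illustrative only: shows when scraping/summarizing happens for each behavior."""
    if behavior == "url_only":
        return url                                    # no scraping, no credits
    if behavior == "live_markdown":
        return scrape(url)                            # scraped on every read
    if behavior == "cached_markdown":
        return cached if cached is not None else url  # scraped once at creation
    if behavior == "live_summary":
        return summarize(scrape(url))                 # scraped + summarized on every read
    if behavior == "cached_summary":
        return cached if cached is not None else url  # summarized once at creation
    raise ValueError(f"unknown read behavior: {behavior}")

The cached branches fall back to the bare URL when no cached content exists, mirroring the automatic downgrade described under Error Handling below.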

Credit Costs

Read Behavior      Creation Cost            Per-Read Cost
URL Only           0                        0
Live Markdown      0                        1 credit
Cached Markdown    1 credit                 0
Live Summary       0                        1 credit + LLM tokens
Cached Summary     1 credit + LLM tokens    0
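
To make the table concrete, here is a small helper that totals platform scrape credits over a resource's lifetime (the numbers come straight from the table; LLM token charges for the summary behaviors are left out because they depend on the model and page size):

def platform_credits(behavior: str, reads: int) -> int:
    """Total platform scrape credits for `reads` reads of one resource."""
    creation = {"url_only": 0, "live_markdown": 0, "cached_markdown": 1,
                "live_summary": 0, "cached_summary": 1}[behavior]
    per_read = {"url_only": 0, "live_markdown": 1, "cached_markdown": 0,
                "live_summary": 1, "cached_summary": 0}[behavior]
    return creation + per_read * reads

# 100 reads of static docs: cached markdown costs 1 credit total, live costs 100.
assert platform_credits("cached_markdown", 100) == 1
assert platform_credits("live_markdown", 100) == 100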

Using Your Own Accounts

You can avoid platform credit charges by providing your own service accounts:

Firecrawl Account

Provide your own Firecrawl API key to avoid scrape credit charges. This works for all scraping operations (Live Markdown, Cached Markdown, Live Summary, Cached Summary).

To use your own Firecrawl account:

  1. Go to Accounts in the sidebar
  2. Add a new Firecrawl account with your API key
  3. When creating a website resource, select your Firecrawl account

LLM Provider Account

Provide your own OpenAI, Anthropic, or Google account to avoid LLM credit charges for summaries.

To use your own LLM provider:

  1. Go to Accounts in the sidebar
  2. Add an account for OpenAI, Anthropic, or Google
  3. When creating a website resource with summary options, select your LLM provider account

Viewing Content

Once a website resource is created, you can view its content directly on the resource details page:

Content Display

  • Cached resources (Cached Markdown, Cached Summary): Content loads automatically
  • Live resources (Live Markdown, Live Summary): Click Fetch Content Now to load (charges credits)
  • URL Only: Displays the URL as a clickable link

Raw vs Rendered View

Toggle between two viewing modes:

  • Rendered: Shows formatted markdown with headers, lists, and styling
  • Raw: Shows the plain text/markdown source

This is useful for debugging or when you need to see the exact content being provided to AI models.
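
As a rough illustration of the difference, the rendered view corresponds to passing the raw text through a markdown renderer, shown here with the third-party Python markdown package (not necessarily what the platform itself uses):

import markdown  # pip install markdown

raw = "# API Docs\n\n- **GET** /users\n- **POST** /users"
print(raw)                     # Raw view: the exact text handed to AI models
print(markdown.markdown(raw))  # Rendered view: formatted output, e.g. <h1>API Docs</h1>...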

Refreshing Cached Content

For Cached Markdown and Cached Summary resources, you can manually refresh the content:

  1. Open the website resource detail page
  2. Click the Refresh Content button
  3. The website will be re-scraped (and re-summarized if applicable)
  4. Credits will be charged for the refresh operation

This is useful when the source website has been updated and you need to capture the new content.
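
Conceptually, a refresh repeats the creation-time work and replaces the stored content. A rough sketch, again with scrape and summarize as stand-ins for the platform's scraping and LLM steps:

from typing import Callable

def refresh_cached(
    behavior: str,                    # "cached_markdown" or "cached_summary"
    url: str,
    scrape: Callable[[str], str],     # re-scrape (1 platform credit, or your Firecrawl account)
    summarize: Callable[[str], str],  # re-summarize (LLM tokens, cached_summary only)
) -> str:
    """Return the new cached content after a manual refresh."""
    markdown_text = scrape(url)
    if behavior == "cached_markdown":
        return markdown_text
    if behavior == "cached_summary":
        return summarize(markdown_text)
    raise ValueError("only cached behaviors support manual refresh")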

Example Use Cases

Documentation Reference

Cache your product documentation for AI context:

Name: Product API Docs
URL: https://docs.myproduct.com/api
Read Behavior: Cached Markdown

News Monitoring

Get live summaries of news pages:

Name: Tech News Summary
URL: https://news.example.com/tech
Read Behavior: Live Summary
LLM Model: Gemini 2.5 Flash

Competitor Analysis

Keep a cached summary of competitor pages:

Name: Competitor Features
URL: https://competitor.com/features
Read Behavior: Cached Summary
LLM Model: Claude Sonnet 4.5

Quick Reference

Just store URLs for quick access:

Name: Support Portal
URL: https://support.mycompany.com
Read Behavior: URL Only

Linking to Servers

After creating a website resource:

  1. Open the resource detail page
  2. In the Linked Servers section, select servers to link
  3. The resource content is now accessible to AI clients connected to those servers
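
If the linked servers expose resources over the Model Context Protocol with a streamable HTTP transport (an assumption; this page names neither the protocol nor the transport), an AI client could list and read the website resource roughly like this, using the official Python mcp SDK and a placeholder server URL:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Placeholder endpoint; substitute your server's actual URL.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            listing = await session.list_resources()
            for res in listing.resources:
                print(res.name, res.uri)
            if listing.resources:
                # Reading triggers the configured read behavior (and any
                # per-read credit charges for the live options).
                result = await session.read_resource(listing.resources[0].uri)
                for item in result.contents:
                    print(getattr(item, "text", ""))  # markdown, summary, or bare URL

asyncio.run(main())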

Best Practices

  • Choose the Right Behavior: Use cached options for static content, live for dynamic
  • Consider Credit Usage: Live options charge on every read
  • Use Your Own Accounts: Bring your own Firecrawl/LLM accounts to reduce costs
  • Refresh When Needed: Manually refresh cached content when source updates
  • Pick Appropriate Models: Lighter models work well for simple summaries
  • Test URLs First: Ensure the website is accessible and scrapable before creating the resource

Error Handling

If website scraping fails during resource creation (e.g., website is unreachable, blocked, or times out):

  • The resource is automatically downgraded to URL Only mode
  • An error message is displayed explaining what went wrong
  • You can still use the resource - it will return just the URL
  • To retry scraping, you can delete the resource and create a new one

Limitations

  • Some websites may block scraping or require authentication
  • Very large pages may be truncated for summarization
  • JavaScript-heavy sites may not scrape completely with basic scraping
  • Live options may be slower due to real-time scraping
  • If processing fails, resources are downgraded to URL Only mode