
# 🔥 Firecrawl
Crawl and convert any website into LLM-ready markdown or structured data. Built by [Mendable.ai](https://mendable.ai?ref=gfirecrawl) and the Firecrawl community. Includes powerful scraping, crawling and data extraction capabilities.
_This repository is in its early development stages. We are still merging custom modules into the mono repo. It's not yet fully ready for self-hosted deployment, but you can already run it locally._
## What is Firecrawl?
[Firecrawl](https://firecrawl.dev?ref=github) is an API service that takes a URL, crawls it, and converts it into clean markdown or structured data. We crawl all accessible subpages and give you clean data for each. No sitemap required. Check out our [documentation](https://docs.firecrawl.dev).
_Psst. Hey, you, join our stargazers :)_
<img src="https://github.com/mendableai/firecrawl/assets/44934913/53c4483a-0f0e-40c6-bd84-153a07f94d29" width="200">
## How to use it?
We provide an easy-to-use API with our hosted version. You can find the playground and documentation [here](https://firecrawl.dev/playground). You can also self-host the backend if you'd like.
2024-04-15 21:01:47 +00:00
- [x] [API](https://firecrawl.dev/playground)
- [x] [Python SDK](https://github.com/mendableai/firecrawl/tree/main/apps/python-sdk)
- [x] [Node SDK](https://github.com/mendableai/firecrawl/tree/main/apps/js-sdk)
- [x] [Langchain Integration 🦜🔗](https://python.langchain.com/docs/integrations/document_loaders/firecrawl/)
- [x] [Langchain JS Integration 🦜🔗](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/firecrawl)
- [x] [Llama Index Integration 🦙](https://docs.llamaindex.ai/en/latest/examples/data_connectors/WebPageDemo/#using-firecrawl-reader)
- [x] [Dify Integration](https://dify.ai/blog/dify-ai-blog-integrated-with-firecrawl)
- [x] [Langflow Integration](https://docs.langflow.org/)
- [x] [Crew.ai Integration](https://docs.crewai.com/)
- [x] [Flowise AI Integration](https://docs.flowiseai.com/integrations/langchain/document-loaders/firecrawl)
- [x] [PraisonAI Integration](https://docs.praison.ai/firecrawl/)
- [x] [Zapier Integration](https://zapier.com/apps/firecrawl/integrations)
- [ ] Want an SDK or Integration? Let us know by opening an issue.
To run locally, refer to the guide [here](https://github.com/mendableai/firecrawl/blob/main/CONTRIBUTING.md).
### API Key
To use the API, you need to sign up on [Firecrawl](https://firecrawl.dev) and get an API key.
### Crawling
Used to crawl a URL and all accessible subpages. This submits a crawl job and returns a job ID to check the status of the crawl.
```bash
curl -X POST https://api.firecrawl.dev/v1/crawl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer fc-YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev",
      "limit": 100,
      "scrapeOptions": {
        "formats": ["markdown", "html"]
      }
    }'
```
Returns a crawl job ID and the URL to check the status of the crawl.
```json
{
"success": true,
"id": "123-456-789",
"url": "https://api.firecrawl.dev/v1/crawl/123-456-789"
}
```
### Check Crawl Job
Used to check the status of a crawl job and get its result.
```bash
curl -X GET https://api.firecrawl.dev/v1/crawl/123-456-789 \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_API_KEY'
```
```json
{
"status": "completed",
"total": 36,
"creditsUsed": 36,
"expiresAt": "2024-00-00T00:00:00.000Z",
"data": [
{
"markdown": "[Firecrawl Docs home page![light logo](https://mintlify.s3-us-west-1.amazonaws.com/firecrawl/logo/light.svg)!...",
"html": "<!DOCTYPE html><html lang=\"en\" class=\"js-focus-visible lg:[--scroll-mt:9.5rem]\" data-js-focus-visible=\"\">...",
"metadata": {
"title": "Build a 'Chat with website' using Groq Llama 3 | Firecrawl",
"language": "en",
"sourceURL": "https://docs.firecrawl.dev/learn/rag-llama3",
"description": "Learn how to use Firecrawl, Groq Llama 3, and Langchain to build a 'Chat with your website' bot.",
"ogLocaleAlternate": [],
"statusCode": 200
}
}
]
}
```
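Crawl jobs are asynchronous, so the submit-and-check flow above is usually wrapped in a polling loop. Here is a minimal standard-library sketch; the endpoint paths follow the v1 examples above, while `wait_for_crawl` and its helpers are illustrative names, not part of the API:

```python
import json
import time
import urllib.request

API_BASE = "https://api.firecrawl.dev/v1"

def status_url(job_id: str) -> str:
    # Build the status endpoint for a crawl job ID, as returned by /v1/crawl.
    return f"{API_BASE}/crawl/{job_id}"

def is_finished(status: str) -> bool:
    # A crawl job is done once it has completed or failed.
    return status in ("completed", "failed")

def wait_for_crawl(job_id: str, api_key: str, poll_interval: float = 5.0) -> dict:
    # Poll the status endpoint until the job finishes, then return the body.
    headers = {"Authorization": f"Bearer {api_key}"}
    while True:
        req = urllib.request.Request(status_url(job_id), headers=headers)
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if is_finished(body["status"]):
            return body
        time.sleep(poll_interval)
```

The SDKs below do this polling for you; this sketch is only useful if you are calling the HTTP API directly.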
### Scraping
Used to scrape a URL and get its content in the specified formats.
```bash
curl -X POST https://api.firecrawl.dev/v1/scrape \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_API_KEY' \
-d '{
"url": "https://docs.firecrawl.dev",
"formats" : ["markdown", "html"]
}'
```
Response:
```json
{
"success": true,
"data": {
"markdown": "Launch Week I is here! [See our Day 2 Release 🚀](https://www.firecrawl.dev/blog/launch-week-i-day-2-doubled-rate-limits)[💥 Get 2 months free...",
"html": "<!DOCTYPE html><html lang=\"en\" class=\"light\" style=\"color-scheme: light;\"><body class=\"__variable_36bd41 __variable_d7dc5d font-inter ...",
"metadata": {
"title": "Home - Firecrawl",
"description": "Firecrawl crawls and converts any website into clean markdown.",
"language": "en",
"keywords": "Firecrawl,Markdown,Data,Mendable,Langchain",
"robots": "follow, index",
"ogTitle": "Firecrawl",
"ogDescription": "Turn any website into LLM-ready data.",
"ogUrl": "https://www.firecrawl.dev/",
"ogImage": "https://www.firecrawl.dev/og.png?123",
"ogLocaleAlternate": [],
"ogSiteName": "Firecrawl",
"sourceURL": "https://firecrawl.dev",
"statusCode": 200
}
}
}
```
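Given a response in the shape above, a small helper can pull out the fields most pipelines need. This is a sketch against the v1 response format shown; the helper name is illustrative:

```python
def unpack_scrape(response: dict) -> tuple[str, str, str]:
    # Return (markdown, title, source URL) from a v1 /scrape response body.
    if not response.get("success"):
        raise ValueError("scrape was not successful")
    data = response["data"]
    meta = data["metadata"]
    return data["markdown"], meta.get("title", ""), meta["sourceURL"]
```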
### Map (Alpha)
Used to map a URL and get the URLs of the website. This returns most of the links present on the website.
```bash
curl -X POST https://api.firecrawl.dev/v1/map \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_API_KEY' \
-d '{
"url": "https://firecrawl.dev"
}'
```
Response:
```json
{
"status": "success",
"links": [
"https://firecrawl.dev",
"https://www.firecrawl.dev/pricing",
"https://www.firecrawl.dev/blog",
"https://www.firecrawl.dev/playground",
    "https://www.firecrawl.dev/smart-crawl"
]
}
```
#### Map with search
Map with the `search` param allows you to search for specific URLs inside a website.
```bash
curl -X POST https://api.firecrawl.dev/v1/map \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_API_KEY' \
-d '{
"url": "https://firecrawl.dev",
"search": "docs"
}'
```
The response will be an ordered list, from most relevant to least relevant.
```json
{
"status": "success",
"links": [
"https://docs.firecrawl.dev",
"https://docs.firecrawl.dev/sdks/python",
    "https://docs.firecrawl.dev/learn/rag-llama3"
]
}
```
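If you have already mapped a site and want a similar keyword filter client-side, a rough approximation is to keep matching links and favor earlier matches. Note this is only a local heuristic, not the API's actual ranking:

```python
def filter_links(links: list[str], term: str) -> list[str]:
    # Keep links containing the term, earliest occurrence first.
    matches = [link for link in links if term in link]
    return sorted(matches, key=lambda link: link.index(term))
```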
### LLM Extraction (v0) (Beta)
Used to extract structured data from scraped pages.
```bash
curl -X POST https://api.firecrawl.dev/v0/scrape \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_API_KEY' \
-d '{
"url": "https://www.mendable.ai/",
"extractorOptions": {
"mode": "llm-extraction",
"extractionPrompt": "Based on the information on the page, extract the information from the schema. ",
"extractionSchema": {
"type": "object",
"properties": {
"company_mission": {
"type": "string"
},
"supports_sso": {
"type": "boolean"
},
"is_open_source": {
"type": "boolean"
},
"is_in_yc": {
"type": "boolean"
}
},
"required": [
"company_mission",
"supports_sso",
"is_open_source",
"is_in_yc"
]
}
}
}'
```
```json
{
"success": true,
"data": {
"content": "Raw Content",
"metadata": {
"title": "Mendable",
"description": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
"robots": "follow, index",
"ogTitle": "Mendable",
"ogDescription": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
"ogUrl": "https://mendable.ai/",
"ogImage": "https://mendable.ai/mendable_new_og1.png",
"ogLocaleAlternate": [],
"ogSiteName": "Mendable",
"sourceURL": "https://mendable.ai/"
},
"llm_extraction": {
"company_mission": "Train a secure AI on your technical resources that answers customer and employee questions so your team doesn't have to",
"supports_sso": true,
"is_open_source": false,
"is_in_yc": true
}
}
}
```
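Because `llm_extraction` mirrors the JSON schema you sent, it's cheap to sanity-check the result before using it downstream. A stdlib-only sketch, where `missing_required` is an illustrative helper and the schema is abbreviated from the request above:

```python
def missing_required(extraction: dict, schema: dict) -> list[str]:
    # List required schema properties absent from the extraction result.
    return [key for key in schema.get("required", []) if key not in extraction]

schema = {
    "type": "object",
    "required": ["company_mission", "supports_sso", "is_open_source", "is_in_yc"],
}
extraction = {
    "company_mission": "Train a secure AI on your technical resources",
    "supports_sso": True,
    "is_open_source": False,
    "is_in_yc": True,
}
assert missing_required(extraction, schema) == []
```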
### Search (v0) (Beta)
Used to search the web, get the most relevant results, scrape each page, and return the markdown. Set `fetchPageContent` to `false` for a fast SERP-style response without page content.
```bash
curl -X POST https://api.firecrawl.dev/v0/search \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "query": "firecrawl",
      "pageOptions": {
        "fetchPageContent": true
      }
    }'
```
```json
{
"success": true,
"data": [
{
"url": "https://mendable.ai",
"markdown": "# Markdown Content",
"provider": "web-scraper",
"metadata": {
"title": "Mendable | AI for CX and Sales",
"description": "AI for CX and Sales",
"language": null,
"sourceURL": "https://www.mendable.ai/"
}
}
]
}
```
## Using Python SDK
### Installing Python SDK
```bash
pip install firecrawl-py
```
### Crawl a website
```python
from firecrawl.firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Scrape a website:
scrape_status = app.scrape_url(
    'https://firecrawl.dev',
    params={'formats': ['markdown', 'html']}
)
print(scrape_status)

# Crawl a website:
crawl_status = app.crawl_url(
    'https://firecrawl.dev',
    params={
        'limit': 100,
        'scrapeOptions': {'formats': ['markdown', 'html']}
    },
    wait_until_done=True,
    poll_interval=30
)
print(crawl_status)
```
### Extracting structured data from a URL
With LLM extraction, you can easily extract structured data from any URL. We support Pydantic schemas to make it even easier. Here is how to use it:
```python
from typing import List

from pydantic import BaseModel, Field

from firecrawl.firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="fc-YOUR_API_KEY", version="v0")

class ArticleSchema(BaseModel):
    title: str
    points: int
    by: str
    commentsURL: str

class TopArticlesSchema(BaseModel):
    top: List[ArticleSchema] = Field(..., max_items=5, description="Top 5 stories")

data = app.scrape_url('https://news.ycombinator.com', {
    'extractorOptions': {
        'extractionSchema': TopArticlesSchema.model_json_schema(),
        'mode': 'llm-extraction'
    },
    'pageOptions': {
        'onlyMainContent': True
    }
})

print(data["llm_extraction"])
```
## Using the Node SDK
### Installation
To install the Firecrawl Node SDK, you can use npm:
```bash
npm install @mendable/firecrawl-js
```
### Usage
1. Get an API key from [firecrawl.dev](https://firecrawl.dev)
2. Set the API key as an environment variable named `FIRECRAWL_API_KEY` or pass it as a parameter to the `FirecrawlApp` class.
```js
import FirecrawlApp, { CrawlParams, CrawlStatusResponse } from '@mendable/firecrawl-js';

const app = new FirecrawlApp({apiKey: "fc-YOUR_API_KEY"});

// Scrape a website
const scrapeResponse = await app.scrapeUrl('https://firecrawl.dev', {
  formats: ['markdown', 'html'],
});

if (scrapeResponse) {
  console.log(scrapeResponse);
}

// Crawl a website
const crawlResponse = await app.crawlUrl('https://firecrawl.dev', {
  limit: 100,
  scrapeOptions: {
    formats: ['markdown', 'html'],
  }
} as CrawlParams, true, 30) as CrawlStatusResponse;

if (crawlResponse) {
  console.log(crawlResponse);
}
```
### Extracting structured data from a URL
With LLM extraction, you can easily extract structured data from any URL. We support Zod schemas to make it even easier. Here is how to use it:
```js
import FirecrawlApp from "@mendable/firecrawl-js";
import { z } from "zod";

const app = new FirecrawlApp({
  apiKey: "fc-YOUR_API_KEY",
  version: "v0"
});

// Define schema to extract contents into
const schema = z.object({
  top: z
    .array(
      z.object({
        title: z.string(),
        points: z.number(),
        by: z.string(),
        commentsURL: z.string(),
      })
    )
    .length(5)
    .describe("Top 5 stories on Hacker News"),
});

const scrapeResult = await app.scrapeUrl("https://news.ycombinator.com", {
  extractorOptions: { extractionSchema: schema },
});

console.log(scrapeResult.data["llm_extraction"]);
```
## Contributing
We love contributions! Please read our [contributing guide](CONTRIBUTING.md) before submitting a pull request.
_It is the sole responsibility of the end users to respect websites' policies when scraping, searching and crawling with Firecrawl. Users are advised to adhere to the applicable privacy policies and terms of use of the websites prior to initiating any scraping activities. By default, Firecrawl respects the directives specified in the websites' robots.txt files when crawling. By utilizing Firecrawl, you expressly agree to comply with these conditions._
## License Disclaimer
This project is primarily licensed under the GNU Affero General Public License v3.0 (AGPL-3.0), as specified in the LICENSE file in the root directory of this repository. However, certain components of this project are licensed under the MIT License.
Please note:
- The AGPL-3.0 license applies to all parts of the project unless otherwise specified.
- The SDKs and some UI components are licensed under the MIT License. Refer to the LICENSE files in these specific directories for details.
- When using or contributing to this project, ensure you comply with the appropriate license terms for the specific component you are working with.

For more details on the licensing of specific components, please refer to the LICENSE files in the respective directories or contact the project maintainers.