SEO-optimized content outline generator using SerpAPI and AI
This is a Content Creation / Multimodal AI automation workflow containing 23 nodes. It mainly uses nodes such as Set, Html, Limit, Markdown, and SplitOut, and it creates SEO outlines from Google search results using SerpAPI and GPT-4o (a sketch of the underlying SerpAPI call follows the requirements below).
- May require authentication credentials for the target API
- OpenAI API key
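For context, the search step in this workflow is a single request to SerpAPI's Google endpoint, using the keyword plus -inurl: exclusions to filter out forum and UGC sites. The TypeScript sketch below shows an equivalent call outside n8n; the SERPAPI_KEY environment variable and the placeholder keyword are assumptions for illustration only.

// Minimal sketch of the SerpAPI call made by the "Get search results" node.
// Assumes Node.js 18+ (global fetch) and a SERPAPI_KEY environment variable;
// inside the workflow, the key is supplied by the SerpAPI credential instead.
const params = new URLSearchParams({
  q: "your keyword here -inurl:reddit.com -inurl:quora.com", // exclude forums/UGC
  location: "United States",
  api_key: process.env.SERPAPI_KEY ?? "",
});

const response = await fetch("https://serpapi.com/search?" + params.toString());
const data = await response.json();

// The workflow keeps the first five organic result URLs and scrapes them next.
const urls = (data.organic_results ?? [])
  .slice(0, 5)
  .map((result: { link: string }) => result.link);

console.log(urls);

The "Extract URLs from JSON" Set node in the workflow does the same slicing with an expression over organic_results.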
{
"id": "ASCZ9YZJvNkHsxOF",
"meta": {
"instanceId": "b597f3db206729fea79938d01afc341ae2811ad29d941b24c1597c0110c58f12"
},
"name": "SEO optimized outline generator using SerpAPI & AI",
"tags": [],
"nodes": [
{
"id": "bddced98-6feb-4327-a4a6-98e76ff0c6c3",
"name": "Créer le plan",
"type": "@n8n/n8n-nodes-langchain.openAi",
"position": [
1808,
-32
],
"parameters": {
"modelId": {
"__rl": true,
"mode": "list",
"value": "gpt-4o",
"cachedResultName": "GPT-4O"
},
"options": {
"temperature": 0.7
},
"messages": {
"values": [
{
"content": "=Please create an outline for an article with the keyword: \"{{ $('Extract URLs from JSON').item.json.Keyword }}\".\n\nBelow are the 3 article you should base the outline on:\n\n===ARTICLE 1===\n{{ ($('Put all articles into one item').item.json.markdown[0] == null) ? \"\" : $('Put all articles into one item').item.json.markdown[0]}}\n===END ARTICLE 1===\n===ARTICLE 2===\n{{ ($('Put all articles into one item').item.json.markdown[1] == null) ? \"\" : $('Put all articles into one item').item.json.markdown[1]}}\n===END ARTICLE 2===\n===ARTICLE 3===\n{{ ($('Put all articles into one item').item.json.markdown[2] == null) ? \"\" : $('Put all articles into one item').item.json.markdown[2]}}\n===END ARTICLE 3"
},
{
"role": "system",
"content": "=You are an SEO Content Strategist AI. \nYour task is to analyze three provided articles and a keyword, and generate a **MECE-structured (mutually exclusive, collectively exhaustive), SEO-optimized outline** for a new article. \n\nFollow these rules precisely:\n\n---\n\n### INPUT:\n- Three full-length articles on a similar topic.\n- One keyword (the target topic for SEO).\n\n---\n\n### TASKS:\n\n1. **Analyze the Articles:**\n - Extract all factual information.\n - Identify recurring themes, structures, headings, and subtopics across all three.\n - Do **not** fabricate or replace facts with your own.\n - Do **not** summarize—focus on content structure and topic relationships.\n\n2. **Keyword Integration:**\n - Use the given keyword to guide topic relevance.\n - Ensure every section in the outline directly supports the keyword.\n\n3. **Create a MECE Outline:**\n - Organize topics into high-level sections and sub-sections that are:\n - **Mutually Exclusive**: No overlapping content or themes.\n - **Collectively Exhaustive**: Together, the sections should fully cover the topic.\n\n4. **Optimize for SEO:**\n - Use heading structures (H1, H2, H3).\n - Incorporate natural language variants and LSI keywords when visible from the articles.\n - Focus on user search intent as inferred from article content.\n\n---\n\n### OUTPUT FORMAT:\n\n# SEO Article Outline: [Insert Keyword]\n\n## Introduction\n- Brief overview of the topic\n- Importance of the keyword/topic\n\n## [Section 1 Title]\n- [Subtopic 1.1]\n- [Subtopic 1.2]\n\n## [Section 2 Title]\n- [Subtopic 2.1]\n- [Subtopic 2.2]\n\n...\n\n## Conclusion\n- Recap key sections\n- Reinforce keyword/topic importance\n\n---\n\n### IMPORTANT:\n- No hallucination. Stick strictly to the data in the 3 articles.\n- Do not introduce new facts or opinions.\n- Every section should clearly trace back to information present in the source material."
}
]
}
},
"credentials": {
"openAiApi": {
"id": "R5mcnwXQyAqWYNFn",
"name": "OpenAi account - SOAX"
}
},
"retryOnFail": true,
"typeVersion": 1.8
},
{
"id": "b334649f-b35b-420c-be22-4895232c385f",
"name": "Obtenir les résultats de recherche",
"type": "n8n-nodes-base.httpRequest",
"position": [
-1056,
-16
],
"parameters": {
"url": "https://serpapi.com/search",
"options": {},
"sendQuery": true,
"authentication": "predefinedCredentialType",
"queryParameters": {
"parameters": [
{
"name": "q",
"value": "={{ $json.Keyword }} -inurl:reddit.com -inurl:quora.com"
},
{
"name": "location",
"value": "United States"
}
]
},
"nodeCredentialType": "serpApi"
},
"credentials": {
"serpApi": {
"id": "YAoSR4cn13A9KovZ",
"name": "SerpAPI account"
}
},
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "6476facf-8f7c-4ef3-af4f-eea3e6124def",
"name": "Extraire les URLs",
"type": "n8n-nodes-base.splitOut",
"position": [
-336,
-16
],
"parameters": {
"options": {},
"fieldToSplitOut": "URLs"
},
"typeVersion": 1
},
{
"id": "1f8fbe1d-2a1b-43f7-a2e0-efa684ea41dc",
"name": "Nettoyer le texte",
"type": "n8n-nodes-base.markdown",
"position": [
1136,
-32
],
"parameters": {
"html": "={{ $json.html }}",
"options": {},
"destinationKey": "markdown"
},
"typeVersion": 1
},
{
"id": "965b2a4f-bb84-4cfc-892a-a9782dbb522d",
"name": "Extraire le contenu",
"type": "n8n-nodes-base.httpRequest",
"onError": "continueErrorOutput",
"maxTries": 5,
"position": [
32,
-16
],
"parameters": {
"url": "={{ $json.URLs }}",
"options": {
"batching": {
"batch": {
"batchSize": 5
}
}
},
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "User-Agent",
"value": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
},
{
"name": "Accept",
"value": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"
},
{
"name": "Accept-Encoding",
"value": "gzip, deflate, br"
},
{
"name": "Accept-Language",
"value": "en-US,en;q=0.5"
}
]
}
},
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "d301d0b4-04fa-432b-ac2b-46fdb852a58f",
"name": "Limite",
"type": "n8n-nodes-base.limit",
"position": [
400,
-32
],
"parameters": {
"maxItems": 3
},
"typeVersion": 1
},
{
"id": "bc5e7c7e-6d6f-440a-85e9-4a4f13b6c377",
"name": "Note autocollante",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2000,
-368
],
"parameters": {
"color": 7,
"width": 416,
"height": 848,
"content": "## Overview\nUse this workflow to create SEO-friendly outlines from high-ranking Google articles. \n\nEnter a keyword, and the workflow scrapes the top results, scrapes the content, analyzes it with AI, and builds a MECE (mutually exclusive, collectively exhaustive) outline. It’s useful for content creators and SEO specialists who want relevant, well-structured content.\n\n## How it works\n- Accepts a keyword submitted through a form\n- Uses the SerpAPI to get top Google results for a chosen country\n- Collects the top five URLs. We use five because we expect some to fail at the scraping stage\n- Scrapes each URL separately\n- Uses the first three articles to fit the AI model’s context window\n- Extracts the main text from the page body\n- Converts HTML to Markdown for cleaner text\n- Combines the cleaned text into a single list for AI processing\n- Analyzes the content with an AI language model to find common topics and headings\n- Generates an SEO-focused outline based on the most frequent topics\n\n## Setup steps\n1. Sign up for a [SerpAPI account](https://serpapi.com/) (free tier available)\n2. Create an [OpenAI account](https://openai.com/api/) and get an API key\n3. Set up your credential within N8N\n4. Run the workflow and enter your keyword\n5. The workflow will generate an SEO-friendly outline for your content"
},
"typeVersion": 1
},
{
"id": "cac60180-c7cb-4c1b-82c8-65ef51c43dba",
"name": "À la soumission du formulaire",
"type": "n8n-nodes-base.formTrigger",
"position": [
-1440,
-16
],
"webhookId": "55fd7c6c-db02-47e1-9e8c-95fe24a5a5da",
"parameters": {
"options": {},
"formTitle": "SEO optimized outline generator",
"formFields": {
"values": [
{
"fieldLabel": "Keyword",
"placeholder": "Your keyword here",
"requiredField": true
}
]
},
"formDescription": "This workflow generates an SEO optimized outline based on what's already ranking"
},
"typeVersion": 2.3
},
{
"id": "c2f8b9ea-8401-4b15-90c7-c3cff105b028",
"name": "Note autocollante 1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1552,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Form trigger\nAdd your keyword to the form and watch the magic happen :zap:"
},
"typeVersion": 1
},
{
"id": "5524c54e-6928-4683-8b96-245830a8161d",
"name": "Note autocollante 2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1184,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Scrape Google using SerpAPI\nIn this Node we're scraping Google and excluding forums and UGC sites like Quora. If you have a big site you might want to exclude your own domain as well by adding `-site:yourdomain.com`"
},
"typeVersion": 1
},
{
"id": "22af269c-6324-4df9-80b0-522f719f1f8a",
"name": "Note autocollante 3",
"type": "n8n-nodes-base.stickyNote",
"position": [
-816,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Extract URLs from JSON\nSerpAPI gives us a lot of info we don't need. In this node we just clean up the results and only extract the URLs we'll use"
},
"typeVersion": 1
},
{
"id": "bdf80a10-0dd4-42ba-965e-00cf94928106",
"name": "Extraire les URLs de JSON",
"type": "n8n-nodes-base.set",
"position": [
-704,
-16
],
"parameters": {
"mode": "raw",
"options": {},
"jsonOutput": "={\n \"Keyword\": \"{{ $('On form submission').item.json.Keyword }}\",\n \"URLs\": [\n \"{{ $json.organic_results[0].link }}\",\n \"{{ $json.organic_results[1].link }}\",\n \"{{ $json.organic_results[2].link }}\",\n \"{{ $json.organic_results[3].link }}\",\n \"{{ $json.organic_results[4].link }}\"\n ]\n}\n"
},
"typeVersion": 3.4
},
{
"id": "a663bc54-0591-49c5-8bb4-f3f5b97b1418",
"name": "Note autocollante 4",
"type": "n8n-nodes-base.stickyNote",
"position": [
-448,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Split out URLs\nThe previous node gives us one list with all the URLs. To scrape them all, we split them out into 5 different items"
},
"typeVersion": 1
},
{
"id": "bfb5461d-ea2f-428b-b622-3ab89b7d23b1",
"name": "Note autocollante 5",
"type": "n8n-nodes-base.stickyNote",
"position": [
-80,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Scrape URLS\nIn this node we make a request to the different URLs we scraped. For now, we just get all the HTML on the page.\n\n:bangbang: IMPORTANT :bangbang:\nUse batching and keep the amount of urls you're scraping in one go to around 5. We don't want to hammer sites with a bunch of requests and we don't want to get our IP blocked for sending too many requests. Respect the sites, and use a proper user agent."
},
"typeVersion": 1
},
{
"id": "963457c6-4c5f-4a73-a960-e0963b60e96f",
"name": "Note autocollante 6",
"type": "n8n-nodes-base.stickyNote",
"position": [
288,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Limit amount of articles\nDuring scraping, we expect some URLs to fail. This can be because the site is down, the site uses anti scraping measures, or there's just some glitch. \n\nIf we did manage to scrape all URLs, that's great, but we'll only be using 3. Otherwise it might not fit within the context window of the LLM"
},
"typeVersion": 1
},
{
"id": "7b212b7a-378b-40dd-8bf1-62ad070417cc",
"name": "Note autocollante 7",
"type": "n8n-nodes-base.stickyNote",
"position": [
656,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Extract body content\nWe don't need all the HTML, we just need the content, so we extract the <body> from the HTML we just scraped. This removes all the stylesheets and JS scripts from the HTML"
},
"typeVersion": 1
},
{
"id": "fbff0de8-1556-40c4-9eda-acdf2d9fb830",
"name": "Note autocollante 8",
"type": "n8n-nodes-base.stickyNote",
"position": [
1024,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Clean up text\nConverting our extracted HTML to markdown cleans up our extracted text by removing all the HTML tags and attributes. This again helps by reducing the amount of text we're passing onto the LLM\n"
},
"typeVersion": 1
},
{
"id": "0dce140c-8108-4d25-8e1f-0e9911b21fc1",
"name": "Extraire le contenu du corps",
"type": "n8n-nodes-base.html",
"position": [
768,
-32
],
"parameters": {
"options": {},
"operation": "extractHtmlContent",
"extractionValues": {
"values": [
{
"key": "html",
"cssSelector": "body",
"skipSelectors": "img, meta, a"
}
]
}
},
"typeVersion": 1.2
},
{
"id": "770a4308-2f09-45a6-a2c2-5f10275a53b9",
"name": "Note autocollante 9",
"type": "n8n-nodes-base.stickyNote",
"position": [
1392,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Put all articles into one item\nWe take our different items containing the scraped content and put it all into one list. This just makes it easier to work with."
},
"typeVersion": 1
},
{
"id": "0b0b8316-ded3-4767-b33d-e9290ee577dc",
"name": "Regrouper tous les articles en un élément",
"type": "n8n-nodes-base.aggregate",
"position": [
1520,
-32
],
"parameters": {
"options": {},
"fieldsToAggregate": {
"fieldToAggregate": [
{
"fieldToAggregate": "markdown"
}
]
}
},
"typeVersion": 1
},
{
"id": "82bd491e-0ffe-4086-8ae5-4f6447be698b",
"name": "Note autocollante 10",
"type": "n8n-nodes-base.stickyNote",
"position": [
1760,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Create outline using AI\nFinally we pass all the articles to an LLM and tell it to look for similarities between all articles.\n\nThis node has some logic to check if the list indeed contains three articles. Let's say 3 out of 5 articles failed to scrape, we just tell the AI to only use the two that succeeded."
},
"typeVersion": 1
},
{
"id": "8e928b9b-20ee-41ae-a54c-dae57ffa6446",
"name": "Faites ce que vous voulez avec le plan",
"type": "n8n-nodes-base.noOp",
"position": [
2240,
-32
],
"parameters": {},
"typeVersion": 1
},
{
"id": "f0105a0d-d158-4237-a0d8-81cabee5c9d1",
"name": "Note autocollante 11",
"type": "n8n-nodes-base.stickyNote",
"position": [
2128,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Continue the workflow...\nWrite the outline to a Google doc, add another LLM to write the content, enrich it with search data. The world is your oyster."
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "d3321dc0-f0cf-48df-ba9b-8a709eace218",
"connections": {
"d301d0b4-04fa-432b-ac2b-46fdb852a58f": {
"main": [
[
{
"node": "0dce140c-8108-4d25-8e1f-0e9911b21fc1",
"type": "main",
"index": 0
}
]
]
},
"1f8fbe1d-2a1b-43f7-a2e0-efa684ea41dc": {
"main": [
[
{
"node": "0b0b8316-ded3-4767-b33d-e9290ee577dc",
"type": "main",
"index": 0
}
]
]
},
"bddced98-6feb-4327-a4a6-98e76ff0c6c3": {
"main": [
[
{
"node": "8e928b9b-20ee-41ae-a54c-dae57ffa6446",
"type": "main",
"index": 0
}
]
]
},
"965b2a4f-bb84-4cfc-892a-a9782dbb522d": {
"main": [
[
{
"node": "d301d0b4-04fa-432b-ac2b-46fdb852a58f",
"type": "main",
"index": 0
}
]
]
},
"6476facf-8f7c-4ef3-af4f-eea3e6124def": {
"main": [
[
{
"node": "965b2a4f-bb84-4cfc-892a-a9782dbb522d",
"type": "main",
"index": 0
}
]
]
},
"b334649f-b35b-420c-be22-4895232c385f": {
"main": [
[
{
"node": "bdf80a10-0dd4-42ba-965e-00cf94928106",
"type": "main",
"index": 0
}
]
]
},
"cac60180-c7cb-4c1b-82c8-65ef51c43dba": {
"main": [
[
{
"node": "b334649f-b35b-420c-be22-4895232c385f",
"type": "main",
"index": 0
}
]
]
},
"0dce140c-8108-4d25-8e1f-0e9911b21fc1": {
"main": [
[
{
"node": "1f8fbe1d-2a1b-43f7-a2e0-efa684ea41dc",
"type": "main",
"index": 0
}
]
]
},
"bdf80a10-0dd4-42ba-965e-00cf94928106": {
"main": [
[
{
"node": "6476facf-8f7c-4ef3-af4f-eea3e6124def",
"type": "main",
"index": 0
}
]
]
},
"0b0b8316-ded3-4767-b33d-e9290ee577dc": {
"main": [
[
{
"node": "bddced98-6feb-4327-a4a6-98e76ff0c6c3",
"type": "main",
"index": 0
}
]
]
}
}
}
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the authentication settings as needed.
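If you would rather script the import than paste it by hand, here is a minimal sketch that creates the workflow through n8n's public REST API. The instance URL, the local file name, and the N8N_API_KEY environment variable are assumptions; the public API must be enabled on your instance, and you will still need to attach your own SerpAPI and OpenAI credentials afterwards.

// Minimal sketch: create the workflow via n8n's public REST API.
// Assumes Node.js 18+, the workflow JSON saved locally, and an API key
// in the N8N_API_KEY environment variable (the instance URL is an example).
import { readFile } from "node:fs/promises";

const workflow = JSON.parse(await readFile("seo-outline-generator.json", "utf8"));

const response = await fetch("http://localhost:5678/api/v1/workflows", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-N8N-API-KEY": process.env.N8N_API_KEY ?? "",
  },
  // Send only the core fields; credentials are re-linked in the n8n UI afterwards.
  body: JSON.stringify({
    name: workflow.name,
    nodes: workflow.nodes,
    connections: workflow.connections,
    settings: workflow.settings,
  }),
});

console.log(response.status, await response.json());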
What scenarios is this workflow suited for?
Advanced - Content Creation, Multimodal AI
Does it cost anything?
This workflow is completely free to use. Note that third-party services used in the workflow (such as the OpenAI API) may require payment on your side.
Robin Geuens
@robingeuens
Hi, I'm Robin. I'm a veteran marketer of 7 years and love automation. If you have any questions about automation, feel free to reach out to me on LinkedIn: https://www.linkedin.com/in/rgeuens/