SEO-optimized outline generator using SerpAPI and AI
Advanced
This is an automation workflow in the Content Creation and Multimodal AI categories, containing 23 nodes. It mainly uses Set, Html, Limit, Markdown, and SplitOut nodes, and uses SerpAPI and GPT-4o to build an SEO outline from Google search results.
Prerequisites
- Credentials for the target APIs may be required
- OpenAI API key
Workflow export
Copy the JSON configuration below and import it into n8n to use this workflow:
{
"id": "ASCZ9YZJvNkHsxOF",
"meta": {
"instanceId": "b597f3db206729fea79938d01afc341ae2811ad29d941b24c1597c0110c58f12"
},
"name": "SEO optimized outline generator using SerpAPI & AI",
"tags": [],
"nodes": [
{
"id": "bddced98-6feb-4327-a4a6-98e76ff0c6c3",
"name": "아웃라인 생성",
"type": "@n8n/n8n-nodes-langchain.openAi",
"position": [
1808,
-32
],
"parameters": {
"modelId": {
"__rl": true,
"mode": "list",
"value": "gpt-4o",
"cachedResultName": "GPT-4O"
},
"options": {
"temperature": 0.7
},
"messages": {
"values": [
{
"content": "=Please create an outline for an article with the keyword: \"{{ $('Extract URLs from JSON').item.json.Keyword }}\".\n\nBelow are the 3 article you should base the outline on:\n\n===ARTICLE 1===\n{{ ($('Put all articles into one item').item.json.markdown[0] == null) ? \"\" : $('Put all articles into one item').item.json.markdown[0]}}\n===END ARTICLE 1===\n===ARTICLE 2===\n{{ ($('Put all articles into one item').item.json.markdown[1] == null) ? \"\" : $('Put all articles into one item').item.json.markdown[1]}}\n===END ARTICLE 2===\n===ARTICLE 3===\n{{ ($('Put all articles into one item').item.json.markdown[2] == null) ? \"\" : $('Put all articles into one item').item.json.markdown[2]}}\n===END ARTICLE 3"
},
{
"role": "system",
"content": "=You are an SEO Content Strategist AI. \nYour task is to analyze three provided articles and a keyword, and generate a **MECE-structured (mutually exclusive, collectively exhaustive), SEO-optimized outline** for a new article. \n\nFollow these rules precisely:\n\n---\n\n### INPUT:\n- Three full-length articles on a similar topic.\n- One keyword (the target topic for SEO).\n\n---\n\n### TASKS:\n\n1. **Analyze the Articles:**\n - Extract all factual information.\n - Identify recurring themes, structures, headings, and subtopics across all three.\n - Do **not** fabricate or replace facts with your own.\n - Do **not** summarize—focus on content structure and topic relationships.\n\n2. **Keyword Integration:**\n - Use the given keyword to guide topic relevance.\n - Ensure every section in the outline directly supports the keyword.\n\n3. **Create a MECE Outline:**\n - Organize topics into high-level sections and sub-sections that are:\n - **Mutually Exclusive**: No overlapping content or themes.\n - **Collectively Exhaustive**: Together, the sections should fully cover the topic.\n\n4. **Optimize for SEO:**\n - Use heading structures (H1, H2, H3).\n - Incorporate natural language variants and LSI keywords when visible from the articles.\n - Focus on user search intent as inferred from article content.\n\n---\n\n### OUTPUT FORMAT:\n\n# SEO Article Outline: [Insert Keyword]\n\n## Introduction\n- Brief overview of the topic\n- Importance of the keyword/topic\n\n## [Section 1 Title]\n- [Subtopic 1.1]\n- [Subtopic 1.2]\n\n## [Section 2 Title]\n- [Subtopic 2.1]\n- [Subtopic 2.2]\n\n...\n\n## Conclusion\n- Recap key sections\n- Reinforce keyword/topic importance\n\n---\n\n### IMPORTANT:\n- No hallucination. Stick strictly to the data in the 3 articles.\n- Do not introduce new facts or opinions.\n- Every section should clearly trace back to information present in the source material."
}
]
}
},
"credentials": {
"openAiApi": {
"id": "R5mcnwXQyAqWYNFn",
"name": "OpenAi account - SOAX"
}
},
"retryOnFail": true,
"typeVersion": 1.8
},
{
"id": "b334649f-b35b-420c-be22-4895232c385f",
"name": "검색 결과 가져오기",
"type": "n8n-nodes-base.httpRequest",
"position": [
-1056,
-16
],
"parameters": {
"url": "https://serpapi.com/search",
"options": {},
"sendQuery": true,
"authentication": "predefinedCredentialType",
"queryParameters": {
"parameters": [
{
"name": "q",
"value": "={{ $json.Keyword }} -inurl:reddit.com -inurl:quora.com"
},
{
"name": "location",
"value": "United States"
}
]
},
"nodeCredentialType": "serpApi"
},
"credentials": {
"serpApi": {
"id": "YAoSR4cn13A9KovZ",
"name": "SerpAPI account"
}
},
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "6476facf-8f7c-4ef3-af4f-eea3e6124def",
"name": "URL 분리",
"type": "n8n-nodes-base.splitOut",
"position": [
-336,
-16
],
"parameters": {
"options": {},
"fieldToSplitOut": "URLs"
},
"typeVersion": 1
},
{
"id": "1f8fbe1d-2a1b-43f7-a2e0-efa684ea41dc",
"name": "텍스트 정리",
"type": "n8n-nodes-base.markdown",
"position": [
1136,
-32
],
"parameters": {
"html": "={{ $json.html }}",
"options": {},
"destinationKey": "markdown"
},
"typeVersion": 1
},
{
"id": "965b2a4f-bb84-4cfc-892a-a9782dbb522d",
"name": "콘텐츠 스크래핑",
"type": "n8n-nodes-base.httpRequest",
"onError": "continueErrorOutput",
"maxTries": 5,
"position": [
32,
-16
],
"parameters": {
"url": "={{ $json.URLs }}",
"options": {
"batching": {
"batch": {
"batchSize": 5
}
}
},
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "User-Agent",
"value": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
},
{
"name": "Accept",
"value": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"
},
{
"name": "Accept-Encoding",
"value": "gzip, deflate, br"
},
{
"name": "Accept-Language",
"value": "en-US,en;q=0.5"
}
]
}
},
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "d301d0b4-04fa-432b-ac2b-46fdb852a58f",
"name": "제한",
"type": "n8n-nodes-base.limit",
"position": [
400,
-32
],
"parameters": {
"maxItems": 3
},
"typeVersion": 1
},
{
"id": "bc5e7c7e-6d6f-440a-85e9-4a4f13b6c377",
"name": "스티커 노트",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2000,
-368
],
"parameters": {
"color": 7,
"width": 416,
"height": 848,
"content": "## Overview\nUse this workflow to create SEO-friendly outlines from high-ranking Google articles. \n\nEnter a keyword, and the workflow scrapes the top results, scrapes the content, analyzes it with AI, and builds a MECE (mutually exclusive, collectively exhaustive) outline. It’s useful for content creators and SEO specialists who want relevant, well-structured content.\n\n## How it works\n- Accepts a keyword submitted through a form\n- Uses the SerpAPI to get top Google results for a chosen country\n- Collects the top five URLs. We use five because we expect some to fail at the scraping stage\n- Scrapes each URL separately\n- Uses the first three articles to fit the AI model’s context window\n- Extracts the main text from the page body\n- Converts HTML to Markdown for cleaner text\n- Combines the cleaned text into a single list for AI processing\n- Analyzes the content with an AI language model to find common topics and headings\n- Generates an SEO-focused outline based on the most frequent topics\n\n## Setup steps\n1. Sign up for a [SerpAPI account](https://serpapi.com/) (free tier available)\n2. Create an [OpenAI account](https://openai.com/api/) and get an API key\n3. Set up your credential within N8N\n4. Run the workflow and enter your keyword\n5. The workflow will generate an SEO-friendly outline for your content"
},
"typeVersion": 1
},
{
"id": "cac60180-c7cb-4c1b-82c8-65ef51c43dba",
"name": "폼 제출 시",
"type": "n8n-nodes-base.formTrigger",
"position": [
-1440,
-16
],
"webhookId": "55fd7c6c-db02-47e1-9e8c-95fe24a5a5da",
"parameters": {
"options": {},
"formTitle": "SEO optimized outline generator",
"formFields": {
"values": [
{
"fieldLabel": "Keyword",
"placeholder": "Your keyword here",
"requiredField": true
}
]
},
"formDescription": "This workflow generates an SEO optimized outline based on what's already ranking"
},
"typeVersion": 2.3
},
{
"id": "c2f8b9ea-8401-4b15-90c7-c3cff105b028",
"name": "스티커 노트1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1552,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Form trigger\nAdd your keyword to the form and watch the magic happen :zap:"
},
"typeVersion": 1
},
{
"id": "5524c54e-6928-4683-8b96-245830a8161d",
"name": "스티커 노트2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1184,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Scrape Google using SerpAPI\nIn this Node we're scraping Google and excluding forums and UGC sites like Quora. If you have a big site you might want to exclude your own domain as well by adding `-site:yourdomain.com`"
},
"typeVersion": 1
},
{
"id": "22af269c-6324-4df9-80b0-522f719f1f8a",
"name": "스티커 노트3",
"type": "n8n-nodes-base.stickyNote",
"position": [
-816,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Extract URLs from JSON\nSerpAPI gives us a lot of info we don't need. In this node we just clean up the results and only extract the URLs we'll use"
},
"typeVersion": 1
},
{
"id": "bdf80a10-0dd4-42ba-965e-00cf94928106",
"name": "JSON에서 URL 추출",
"type": "n8n-nodes-base.set",
"position": [
-704,
-16
],
"parameters": {
"mode": "raw",
"options": {},
"jsonOutput": "={\n \"Keyword\": \"{{ $('On form submission').item.json.Keyword }}\",\n \"URLs\": [\n \"{{ $json.organic_results[0].link }}\",\n \"{{ $json.organic_results[1].link }}\",\n \"{{ $json.organic_results[2].link }}\",\n \"{{ $json.organic_results[3].link }}\",\n \"{{ $json.organic_results[4].link }}\"\n ]\n}\n"
},
"typeVersion": 3.4
},
{
"id": "a663bc54-0591-49c5-8bb4-f3f5b97b1418",
"name": "스티커 노트4",
"type": "n8n-nodes-base.stickyNote",
"position": [
-448,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Split out URLs\nThe previous node gives us one list with all the URLs. To scrape them all, we split them out into 5 different items"
},
"typeVersion": 1
},
{
"id": "bfb5461d-ea2f-428b-b622-3ab89b7d23b1",
"name": "스티커 노트5",
"type": "n8n-nodes-base.stickyNote",
"position": [
-80,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Scrape URLS\nIn this node we make a request to the different URLs we scraped. For now, we just get all the HTML on the page.\n\n:bangbang: IMPORTANT :bangbang:\nUse batching and keep the amount of urls you're scraping in one go to around 5. We don't want to hammer sites with a bunch of requests and we don't want to get our IP blocked for sending too many requests. Respect the sites, and use a proper user agent."
},
"typeVersion": 1
},
{
"id": "963457c6-4c5f-4a73-a960-e0963b60e96f",
"name": "스티커 노트6",
"type": "n8n-nodes-base.stickyNote",
"position": [
288,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Limit amount of articles\nDuring scraping, we expect some URLs to fail. This can be because the site is down, the site uses anti scraping measures, or there's just some glitch. \n\nIf we did manage to scrape all URLs, that's great, but we'll only be using 3. Otherwise it might not fit within the context window of the LLM"
},
"typeVersion": 1
},
{
"id": "7b212b7a-378b-40dd-8bf1-62ad070417cc",
"name": "스티커 노트7",
"type": "n8n-nodes-base.stickyNote",
"position": [
656,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Extract body content\nWe don't need all the HTML, we just need the content, so we extract the <body> from the HTML we just scraped. This removes all the stylesheets and JS scripts from the HTML"
},
"typeVersion": 1
},
{
"id": "fbff0de8-1556-40c4-9eda-acdf2d9fb830",
"name": "스티커 노트8",
"type": "n8n-nodes-base.stickyNote",
"position": [
1024,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Clean up text\nConverting our extracted HTML to markdown cleans up our extracted text by removing all the HTML tags and attributes. This again helps by reducing the amount of text we're passing onto the LLM\n"
},
"typeVersion": 1
},
{
"id": "0dce140c-8108-4d25-8e1f-0e9911b21fc1",
"name": "본문 콘텐츠 추출",
"type": "n8n-nodes-base.html",
"position": [
768,
-32
],
"parameters": {
"options": {},
"operation": "extractHtmlContent",
"extractionValues": {
"values": [
{
"key": "html",
"cssSelector": "body",
"skipSelectors": "img, meta, a"
}
]
}
},
"typeVersion": 1.2
},
{
"id": "770a4308-2f09-45a6-a2c2-5f10275a53b9",
"name": "스티커 노트9",
"type": "n8n-nodes-base.stickyNote",
"position": [
1392,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Put all articles into one item\nWe take our different items containing the scraped content and put it all into one list. This just makes it easier to work with."
},
"typeVersion": 1
},
{
"id": "0b0b8316-ded3-4767-b33d-e9290ee577dc",
"name": "모든 글을 하나로 합치기",
"type": "n8n-nodes-base.aggregate",
"position": [
1520,
-32
],
"parameters": {
"options": {},
"fieldsToAggregate": {
"fieldToAggregate": [
{
"fieldToAggregate": "markdown"
}
]
}
},
"typeVersion": 1
},
{
"id": "82bd491e-0ffe-4086-8ae5-4f6447be698b",
"name": "스티커 노트10",
"type": "n8n-nodes-base.stickyNote",
"position": [
1760,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Create outline using AI\nFinally we pass all the articles to an LLM and tell it to look for similarities between all articles.\n\nThis node has some logic to check if the list indeed contains three articles. Let's say 3 out of 5 articles failed to scrape, we just tell the AI to only use the two that succeeded."
},
"typeVersion": 1
},
{
"id": "8e928b9b-20ee-41ae-a54c-dae57ffa6446",
"name": "아웃라인 원하는 방식으로 활용",
"type": "n8n-nodes-base.noOp",
"position": [
2240,
-32
],
"parameters": {},
"typeVersion": 1
},
{
"id": "f0105a0d-d158-4237-a0d8-81cabee5c9d1",
"name": "스티커 노트11",
"type": "n8n-nodes-base.stickyNote",
"position": [
2128,
-368
],
"parameters": {
"color": 7,
"width": 336,
"height": 848,
"content": "## Continue the workflow...\nWrite the outline to a Google doc, add another LLM to write the content, enrich it with search data. The world is your oyster."
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "d3321dc0-f0cf-48df-ba9b-8a709eace218",
"connections": {
"d301d0b4-04fa-432b-ac2b-46fdb852a58f": {
"main": [
[
{
"node": "0dce140c-8108-4d25-8e1f-0e9911b21fc1",
"type": "main",
"index": 0
}
]
]
},
"1f8fbe1d-2a1b-43f7-a2e0-efa684ea41dc": {
"main": [
[
{
"node": "0b0b8316-ded3-4767-b33d-e9290ee577dc",
"type": "main",
"index": 0
}
]
]
},
"bddced98-6feb-4327-a4a6-98e76ff0c6c3": {
"main": [
[
{
"node": "8e928b9b-20ee-41ae-a54c-dae57ffa6446",
"type": "main",
"index": 0
}
]
]
},
"965b2a4f-bb84-4cfc-892a-a9782dbb522d": {
"main": [
[
{
"node": "d301d0b4-04fa-432b-ac2b-46fdb852a58f",
"type": "main",
"index": 0
}
]
]
},
"6476facf-8f7c-4ef3-af4f-eea3e6124def": {
"main": [
[
{
"node": "965b2a4f-bb84-4cfc-892a-a9782dbb522d",
"type": "main",
"index": 0
}
]
]
},
"b334649f-b35b-420c-be22-4895232c385f": {
"main": [
[
{
"node": "bdf80a10-0dd4-42ba-965e-00cf94928106",
"type": "main",
"index": 0
}
]
]
},
"cac60180-c7cb-4c1b-82c8-65ef51c43dba": {
"main": [
[
{
"node": "b334649f-b35b-420c-be22-4895232c385f",
"type": "main",
"index": 0
}
]
]
},
"0dce140c-8108-4d25-8e1f-0e9911b21fc1": {
"main": [
[
{
"node": "1f8fbe1d-2a1b-43f7-a2e0-efa684ea41dc",
"type": "main",
"index": 0
}
]
]
},
"bdf80a10-0dd4-42ba-965e-00cf94928106": {
"main": [
[
{
"node": "6476facf-8f7c-4ef3-af4f-eea3e6124def",
"type": "main",
"index": 0
}
]
]
},
"0b0b8316-ded3-4767-b33d-e9290ee577dc": {
"main": [
[
{
"node": "bddced98-6feb-4327-a4a6-98e76ff0c6c3",
"type": "main",
"index": 0
}
]
]
}
}
}
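For readers who want to follow the data flow outside n8n, below is a minimal Python sketch of the same pipeline. It is an illustration, not part of the exported workflow: the requests, beautifulsoup4, markdownify, and openai packages stand in for the HTTP Request, Html, Markdown, and OpenAI nodes, the SERPAPI_KEY and OPENAI_API_KEY environment variables are assumed, and the system prompt is abbreviated (the full version lives in the "Create outline" node above).

import os

import requests
from bs4 import BeautifulSoup
from markdownify import markdownify
from openai import OpenAI

USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
)

# Shortened stand-in; use the full system prompt from the "Create outline" node.
SYSTEM_PROMPT = (
    "You are an SEO Content Strategist AI. Analyze the provided articles and "
    "keyword and generate a MECE-structured, SEO-optimized outline."
)

def search_google(keyword: str) -> list[str]:
    """Get search results from SerpAPI and keep the top five organic URLs."""
    resp = requests.get(
        "https://serpapi.com/search.json",
        params={
            "q": f"{keyword} -inurl:reddit.com -inurl:quora.com",
            "location": "United States",
            "api_key": os.environ["SERPAPI_KEY"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [r["link"] for r in resp.json().get("organic_results", [])[:5]]

def scrape_as_markdown(url: str) -> str | None:
    """Scrape one URL, extract the <body>, and convert it to Markdown."""
    try:
        resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
        resp.raise_for_status()
    except requests.RequestException:
        return None  # mirrors the node's "continue on error" behaviour
    body = BeautifulSoup(resp.text, "html.parser").body
    if body is None:
        return None
    for tag in body(["script", "style", "img", "meta", "a"]):
        tag.decompose()  # drop scripts, styles, and the skipped selectors
    return markdownify(str(body))

def build_outline(keyword: str) -> str:
    scraped = (scrape_as_markdown(u) for u in search_google(keyword))
    articles = [md for md in scraped if md][:3]  # Limit node: max 3 articles
    blocks = "\n".join(
        f"===ARTICLE {i}===\n{text}\n===END ARTICLE {i}==="
        for i, text in enumerate(articles, start=1)
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f'Please create an outline for an article with the '
                f'keyword: "{keyword}".\n\n{blocks}',
            },
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(build_outline("how to brew pour-over coffee"))

As in the workflow, failed scrapes are simply dropped, and at most three articles are passed to the model so the prompt stays within the context window.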
Frequently asked questions
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings as needed.
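If you prefer the terminal, n8n also ships a CLI importer that can be used instead of the UI (availability may depend on your n8n version): save the JSON above to workflow.json and run n8n import:workflow --input=workflow.json.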
What scenarios is this workflow suited for?
Advanced - Content Creation, Multimodal AI
Is it paid?
The workflow itself is completely free; you can import and use it directly. However, third-party services it relies on (such as the OpenAI API) may charge you for usage.
Related workflow recommendations
- Generate and enrich fact-based blog outlines using GPT-4o and Tavily search data (17 nodes, Robin Geuens, Content Creation)
- Automated keyword-to-WordPress SEO blog pipeline using GPT-5 and fal.ai images (96 nodes, Paul, Content Creation)
- WordPress blog automation professional edition (deep research) v2.1: automated SEO-optimized blog generation using GPT-4o, Perplexity AI, and multilingual support (125 nodes, Daniel Ng, Content Creation)
- My workflow 5: a comprehensive research report generator based on Gemini AI, web search, and PDF delivery (102 nodes, Hichul, Content Creation)
- Create SEO-optimized blog posts on WordPress with Perplexity and GPT, including keywords and media (124 nodes, Paul, Content Creation)
- Create SEO-optimized blog posts on WordPress using specific tools (124 nodes, Paul, Content Creation)
Workflow information
Difficulty: Advanced
Nodes: 23
Categories: 2
Node types: 11
Author
Robin Geuens (@robingeuens)
Hi, I'm Robin. I'm a veteran marketer of 7 years and love automation. If you have any questions about automation feel free to reach out to me on LinkedIn: https://www.linkedin.com/in/rgeuens/
External links
View on n8n.io →