Company Online Presence Monitor
Advanced
This is an automation workflow for Market Research and AI Summarization, comprising 17 nodes. It mainly uses Set, Cron, Gmail, OpenAI, and Reddit nodes to provide daily monitoring of a company's online presence with AI sentiment analysis and multi-platform tracking.
Prerequisites
- •Google account and Gmail API credentials
- •OpenAI API Key
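For context, step 6 of the workflow sends every new mention to OpenAI's Chat Completions API and asks for a sentiment label plus a two-sentence summary in JSON. Below is a minimal standalone sketch of that call, assuming Node.js 18+ (for the global fetch) and an OPENAI_API_KEY environment variable; inside n8n, the OpenAI node performs the equivalent request through its stored credential.

// Sketch of the sentiment/summary request made by the "AI: Analyze Sentiment & Summarize" node.
// Assumes Node.js 18+ and OPENAI_API_KEY; the prompt mirrors the one embedded in the workflow JSON.
async function analyzeMention(companyName, mention) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [
        {
          role: 'system',
          content: `You are a sentiment analyzer and summarizer for company mentions. For the following text related to '${companyName}', determine its overall sentiment (Positive, Negative, or Neutral) and provide a concise 2-sentence summary. Output in JSON format: {"sentiment": "[Positive/Negative/Neutral]", "summary": "[Concise summary]"}`
        },
        {
          role: 'user',
          content: `Source: ${mention.source}\nTitle: ${mention.title}\nText: ${mention.text}`
        }
      ]
    })
  });
  const data = await response.json();
  // Step 7 of the workflow parses this JSON and buckets the mention by sentiment;
  // parsing failures fall into the "noAnalysis" category.
  return JSON.parse(data.choices[0].message.content);
}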
Workflow Export
Copy the JSON configuration below and import it into n8n to use this workflow.
{
"nodes": [
{
"name": "매일 아침 트리거 (오전 9시)",
"type": "n8n-nodes-base.cron",
"notes": {
"text": "### 1. Daily Morning Trigger (9 AM)\n\nThis `Cron` node triggers the workflow automatically every **day at 9:00 AM** (based on your n8n server's local time zone). This provides a regular daily digest of activity.\n\n**To change the schedule:** Adjust the 'Hour' and 'Minute' fields to your preferred time for receiving the report.",
"position": "right"
},
"position": [
240,
300
],
"parameters": {
"mode": "everyDay",
"value": {
"hour": [
9
],
"minute": [
0
]
},
"options": {}
},
"typeVersion": 1,
"id": "-9--0"
},
{
"name": "회사 세부 정보 설정",
"type": "n8n-nodes-base.set",
"notes": {
"text": "### 2. Set Company Details\n\nThis `Set` node defines the company you want to monitor and the keywords to search for.\n\n**Setup (CRITICAL!):**\n1. **`companyName`:** Change `YourCompanyNameHere` to the exact name of the company you are monitoring (e.g., 'Google', 'Coca-Cola'). This is used for the email subject and AI prompts.\n2. **`searchKeywords`:** List all relevant keywords, including common misspellings, product names, hashtags, or related terms. These will be used for searching news, Reddit, and YouTube. Example: `[\"Acme Corp\", \"AcmeCo\", \"#AcmeCorp\", \"AcmeSoftware\"]`",
"position": "right"
},
"position": [
480,
300
],
"parameters": {
"values": [
{
"name": "companyName",
"value": "YourCompanyNameHere"
},
{
"name": "searchKeywords",
"value": [
"YourCompanyNameHere",
"YourCompanyProduct",
"#YourCompanyTag"
]
}
],
"options": {}
},
"typeVersion": 2,
"id": "--1"
},
{
"name": "Google 뉴스 RSS 가져오기",
"type": "n8n-nodes-base.rssFeed",
"notes": {
"text": "### 3.1. Fetch Google News RSS\n\nThis `RSS Feed` node fetches news articles from Google News that mention your `companyName`.\n\n**Setup:**\n1. **URL:** It's pre-configured to search Google News using your `companyName`. You can adjust the `hl` (host language) and `gl` (geo location) parameters if needed.\n\n**Note:** Google News RSS is a good general news source, but it might not capture every single article across all sites.",
"position": "right"
},
"position": [
720,
100
],
"parameters": {
"url": "=https://news.google.com/rss/search?q={{ encodeURIComponent($node[\"Set Company Details\"].json.companyName) }}&hl=en-US&gl=US&ceid=US:en",
"options": {}
},
"typeVersion": 1,
"id": "Google-RSS--2"
},
{
"name": "병합을 위한 뉴스 준비",
"type": "n8n-nodes-base.function",
"notes": {
"text": "### 3.2. Prepare News for Merge\n\nThis `Function` node standardizes the news articles' data into a consistent format (`source`, `title`, `text`, `link`, `publishedAt`) for later merging and AI processing.\n\n**No configuration needed.**",
"position": "right"
},
"position": [
960,
100
],
"parameters": {
"options": {},
"function": "const preparedItems = [];\n\nfor (const item of items) {\n preparedItems.push({\n json: {\n source: 'News Article',\n title: item.json.title || 'N/A',\n text: item.json.contentSnippet || item.json.description || 'N/A',\n link: item.json.link || '#',\n publishedAt: item.json.isoDate || new Date().toISOString()\n }\n });\n}\n\nreturn preparedItems;"
},
"typeVersion": 1,
"id": "--3"
},
{
"name": "Reddit 게시물 검색",
"type": "n8n-nodes-base.reddit",
"notes": {
"text": "### 3.3. Search Reddit Posts\n\nThis `Reddit` node searches for posts containing your `searchKeywords`.\n\n**Setup:**\n1. **Reddit Credential:** Click 'Credentials' and select 'New Credential'. Choose 'Reddit OAuth2 API'. Follow n8n documentation for detailed steps to create a Reddit app (type 'script') and get your client ID and secret. You'll also need your Reddit username/password for authentication.\n2. **Query:** It's pre-configured to search using your `searchKeywords` joined by ' OR '.\n3. **Limit & Sort:** Adjust the number of results (`limit`) and `sort` order as needed.\n\n**Note:** Reddit API has rate limits, so be mindful of how often you run this if you increase the limit significantly.",
"position": "right"
},
"position": [
720,
300
],
"parameters": {
"query": "={{ $node[\"Set Company Details\"].json.searchKeywords.join(' OR ') }}",
"options": {
"sort": "hot",
"limit": 20
},
"resource": "post",
"operation": "search"
},
"credentials": {
"redditOAuth2Api": {
"id": "YOUR_REDDIT_CREDENTIAL_ID",
"resolve": false
}
},
"typeVersion": 1,
"id": "Reddit--4"
},
{
"name": "Reddit 병합 준비",
"type": "n8n-nodes-base.function",
"notes": {
"text": "### 3.4. Prepare Reddit for Merge\n\nThis `Function` node standardizes the Reddit posts' data, ensuring consistent `source`, `title`, `text`, `link`, and `publishedAt` fields for merging.\n\n**No configuration needed.**",
"position": "right"
},
"position": [
960,
300
],
"parameters": {
"options": {},
"function": "const preparedItems = [];\n\nfor (const item of items) {\n preparedItems.push({\n json: {\n source: 'Reddit Post',\n title: item.json.title || 'N/A',\n text: item.json.selftext || item.json.body || 'N/A',\n link: item.json.url || '#',\n publishedAt: new Date(item.json.created_utc * 1000).toISOString() // Convert Unix timestamp to ISO string\n }\n });\n}\n\nreturn preparedItems;"
},
"typeVersion": 1,
"id": "Reddit--5"
},
{
"name": "YouTube 동영상 검색",
"type": "n8n-nodes-base.youTube",
"notes": {
"text": "### 3.5. Search YouTube Videos\n\nThis `YouTube` node searches for videos mentioning your `searchKeywords`.\n\n**Setup:**\n1. **Google Credential:** Click 'Credentials' and select 'New Credential'. Choose 'Google OAuth2 API'. You'll need to enable the YouTube Data API v3 in your Google Cloud Project and configure OAuth2 credentials (Client ID/Secret) for desktop app/other. This can be complex; search n8n docs for 'Google OAuth2' for detailed steps.\n2. **Search:** Pre-configured to use your `searchKeywords`.\n3. **Limit & Order:** Adjust the number of results and order as needed.\n\n**Note:** YouTube API has quotas, so excessive searches might lead to temporary limits.",
"position": "right"
},
"position": [
720,
500
],
"parameters": {
"search": "={{ $node[\"Set Company Details\"].json.searchKeywords.join(' ') }}",
"options": {
"limit": 10,
"order": "relevance"
},
"resource": "video",
"operation": "list"
},
"credentials": {
"googleApi": {
"id": "YOUR_GOOGLE_API_CREDENTIAL_ID",
"resolve": false
}
},
"typeVersion": 1,
"id": "YouTube--6"
},
{
"name": "YouTube 병합 준비",
"type": "n8n-nodes-base.function",
"notes": {
"text": "### 3.6. Prepare YouTube for Merge\n\nThis `Function` node standardizes YouTube video data into the common format (`source`, `title`, `text`, `link`, `publishedAt`) for merging.\n\n**No configuration needed.**",
"position": "right"
},
"position": [
960,
500
],
"parameters": {
"options": {},
"function": "const preparedItems = [];\n\nfor (const item of items) {\n preparedItems.push({\n json: {\n source: 'YouTube Video',\n title: item.json.snippet.title || 'N/A',\n text: item.json.snippet.description || 'N/A',\n link: `https://www.youtube.com/watch?v=${item.json.id.videoId}` || '#',\n publishedAt: item.json.snippet.publishedAt || new Date().toISOString()\n }\n });\n}\n\nreturn preparedItems;"
},
"typeVersion": 1,
"id": "YouTube--7"
},
{
"name": "모든 언급 병합",
"type": "n8n-nodes-base.itemLists",
"notes": {
"text": "### 4. Merge All Mentions\n\nThis `Item Lists` node combines all prepared data from Google News, Reddit, and YouTube into a single, unified list. This is the stream of all potential mentions.\n\n**No configuration needed.**",
"position": "right"
},
"position": [
1200,
300
],
"parameters": {
"mode": "merge",
"options": {}
},
"typeVersion": 1,
"id": "--8"
},
{
"name": "SQLite: 테이블 존재 확인",
"type": "n8n-nodes-base.sqlite",
"notes": {
"text": "### 5.1. SQLite: Ensure Table Exists\n\nThis `SQLite` node ensures a local database table named `processed_mentions` exists. This table will store a hash of every mention seen, preventing duplicate notifications over time.\n\n**Setup:**\n* **Database:** `company_monitor` (this creates a file `company_monitor.db` in your n8n data directory).\n* **Query:** The `CREATE TABLE IF NOT EXISTS` query is pre-filled.\n\n**No further action needed**; this node runs automatically.",
"position": "right"
},
"position": [
1440,
220
],
"parameters": {
"query": "CREATE TABLE IF NOT EXISTS processed_mentions (link_hash TEXT PRIMARY KEY, source TEXT, title TEXT, link TEXT, processed_date TEXT)",
"database": "company_monitor"
},
"typeVersion": 1,
"id": "SQLite--9"
},
{
"name": "새 언급 필터링 (중복 제거)",
"type": "n8n-nodes-base.function",
"notes": {
"text": "### 5.2. Filter New Mentions (Deduplication)\n\nThis `Function` node generates a unique hash for each mention (based on its link/title) and then checks the `processed_mentions` database to see if it's already been seen. **Only truly new mentions are passed on.**\n\n**Setup:**\n* It uses `crypto` (built-in) for hashing.\n* It executes the 'SQLite: Check If Processed' node for each item.\n\n**Important:** If you change the `SQLite: Check If Processed` node's name, update it here. Also, ensure 'Run Once Per Item' is OFF for this Function node.",
"position": "right"
},
"position": [
1440,
380
],
"parameters": {
"options": {
"runOncePerItem": false
},
"function": "const crypto = require('crypto');\n\nconst newItems = [];\n\nfor (const item of items) {\n const link = item.json.link || item.json.title; // Use link or title as primary unique identifier\n const linkHash = crypto.createHash('md5').update(link).digest('hex');\n\n // Add the hash to the item for later use\n item.json.linkHash = linkHash;\n\n // Check if this item (based on linkHash) is already in the database\n const result = await n8n.getNodeParameter('SQLite: Check If Processed', 'query', { json: item.json }, { execute: true });\n\n if (result.length === 0) {\n // Item not found in DB, it's new\n newItems.push(item);\n }\n}\n\nreturn newItems;"
},
"typeVersion": 1,
"id": "--10"
},
{
"name": "SQLite: 처리 여부 확인",
"type": "n8n-nodes-base.sqlite",
"notes": {
"text": "### (Helper) SQLite: Check If Processed\n\nThis helper `SQLite` node is called by the 'Filter New Mentions' Function node to check if a specific `link_hash` already exists in the database.\n\n**No direct configuration needed**; it's managed by the Function node.",
"position": "right"
},
"position": [
1700,
380
],
"parameters": {
"query": "SELECT link_hash FROM processed_mentions WHERE link_hash = '{{ $json.linkHash }}'",
"database": "company_monitor"
},
"typeVersion": 1,
"id": "SQLite--11"
},
{
"name": "AI: 감정 분석 및 요약",
"type": "n8n-nodes-base.openAi",
"notes": {
"text": "### 6. AI: Analyze Sentiment & Summarize\n\nThis `OpenAI` node processes each *new* mention to determine its sentiment and generate a brief summary.\n\n**Setup:**\n1. **OpenAI Credential:** Select your OpenAI API Key credential.\n2. **Model:** `gpt-3.5-turbo` is selected. For more robust analysis, consider `gpt-4o`.\n3. **Prompt:** The system prompt guides the AI to output sentiment and summary in a structured JSON format, making it easy to parse in the next step.\n\n**Output:** Each item will have a `choices` field containing the AI's JSON output.",
"position": "right"
},
"position": [
1920,
300
],
"parameters": {
"model": "gpt-3.5-turbo",
"options": {},
"messages": [
{
"role": "system",
"content": "You are a sentiment analyzer and summarizer for company mentions. For the following text related to '{{ $node[\"Set Company Details\"].json.companyName }}', determine its overall sentiment (Positive, Negative, or Neutral) and provide a concise 2-sentence summary. Output in JSON format:\n\n{\n \"sentiment\": \"[Positive/Negative/Neutral]\",\n \"summary\": \"[Concise summary]\"\n}"
},
{
"role": "user",
"content": "Source: {{ $json.source }}\\nTitle: {{ $json.title }}\\nText: {{ $json.text }}"
}
]
},
"credentials": {
"openAiApi": {
"id": "YOUR_OPENAI_CREDENTIAL_ID",
"resolve": false
}
},
"typeVersion": 1,
"id": "AI--12"
},
{
"name": "AI 결과 처리 및 분류",
"type": "n8n-nodes-base.function",
"notes": {
"text": "### 7. Process AI Results & Categorize\n\nThis `Function` node parses the AI's JSON output and categorizes all *new* mentions by their sentiment (Positive, Negative, Neutral). It also handles potential AI parsing errors.\n\n**Output:** A single item containing categorized lists of new mentions (e.g., `positive: [...]`, `negative: [...]`).\n\n**No configuration needed.**",
"position": "right"
},
"position": [
2160,
300
],
"parameters": {
"options": {},
"function": "const results = {\n positive: [],\n neutral: [],\n negative: [],\n noAnalysis: []\n};\n\nfor (const item of items) {\n try {\n const aiOutput = JSON.parse(item.json.choices[0].message.content);\n const sentiment = aiOutput.sentiment ? aiOutput.sentiment.toLowerCase() : 'unknown';\n const summary = aiOutput.summary || 'No summary provided.';\n\n const data = {\n source: item.json.source,\n title: item.json.title,\n link: item.json.link,\n summary: summary,\n publishedAt: item.json.publishedAt,\n linkHash: item.json.linkHash // Pass the hash for database insertion\n };\n\n if (sentiment === 'positive') {\n results.positive.push(data);\n } else if (sentiment === 'negative') {\n results.negative.push(data);\n } else if (sentiment === 'neutral') {\n results.neutral.push(data);\n } else {\n results.noAnalysis.push(data);\n }\n } catch (e) {\n results.noAnalysis.push({ ...item.json, summary: `AI analysis failed: ${e.message}` });\n }\n}\n\nreturn [{ json: results }];"
},
"typeVersion": 1,
"id": "AI--13"
},
{
"name": "SQLite: 처리된 언급 기록",
"type": "n8n-nodes-base.sqlite",
"notes": {
"text": "### 8. SQLite: Record Processed Mentions\n\nThis `SQLite` node records all the *newly processed* mentions into your `processed_mentions` database. This is crucial for the deduplication step in future runs.\n\n**Setup:**\n* **Database:** `company_monitor`\n* **Query:** The `INSERT INTO` query is pre-filled, storing the unique hash and other details of the processed mention.\n\n**No further action needed**; it automatically stores the data.",
"position": "right"
},
"position": [
2400,
220
],
"parameters": {
"query": "INSERT INTO processed_mentions (link_hash, source, title, link, processed_date) VALUES ('{{ $json.linkHash }}', '{{ $json.source }}', '{{ $json.title }}', '{{ $json.link }}', '{{ new Date().toISOString() }}')",
"database": "company_monitor"
},
"typeVersion": 1,
"id": "SQLite--14"
},
{
"name": "보고서 이메일 형식 지정",
"type": "n8n-nodes-base.function",
"notes": {
"text": "### 9. Format Report Email\n\nThis `Function` node constructs the final email body. It groups mentions by sentiment and provides summaries and links for each. It also handles the case where no *new* mentions were found.\n\n**Customization:**\n* You can adjust the headings, introduction, or the level of detail for each mention.\n* The email body uses Markdown for formatting (bolding, lists).\n\n**No configuration needed.**",
"position": "right"
},
"position": [
2400,
380
],
"parameters": {
"options": {},
"function": "const data = items[0].json;\nconst companyName = $node[\"Set Company Details\"].json.companyName;\n\nlet emailBody = `Good morning! Here's your daily online presence report for **${companyName}**.\\n\\n`;\n\n// Helper to format a list of mentions\nconst formatMentions = (mentions, heading) => {\n if (mentions.length === 0) return '';\n\n let formatted = `### ${heading} (${mentions.length} mentions)\\n`;\n mentions.forEach(m => {\n formatted += `* **[${m.source}]** ${m.title}\\n` +\n ` Summary: ${m.summary}\\n` +\n ` Link: ${m.link}\\n` +\n ` (Published: ${new Date(m.publishedAt).toLocaleDateString()})\\n\\n`;\n });\n return formatted;\n};\n\nemailBody += formatMentions(data.positive, 'Positive Mentions');\nemailBody += formatMentions(data.neutral, 'Neutral Mentions');\nemailBody += formatMentions(data.negative, 'Negative Mentions');\nemailBody += formatMentions(data.noAnalysis, 'Mentions with No Analysis / Error');\n\n\n// Check if any new mentions were processed\nif (data.positive.length + data.neutral.length + data.negative.length + data.noAnalysis.length === 0) {\n emailBody = `Good morning! No *new* online mentions found for **${companyName}** today.`;\n}\n\nreturn [{ json: { emailSubject: `📊 Daily Online Presence Report for ${companyName}`, emailBody: emailBody } }];"
},
"typeVersion": 1,
"id": "--15"
},
{
"name": "보고서 이메일 전송",
"type": "n8n-nodes-base.gmail",
"notes": {
"text": "### 10. Send Report Email\n\nThis `Gmail` node sends the compiled daily online presence report to your mailbox.\n\n**Setup:**\n1. **Gmail Credential:** Select your Gmail API credential.\n2. **From Email:** Enter your Gmail address (must match the authenticated account).\n3. **To Email:** **IMPORTANT: Change `YOUR_RECIPIENT_EMAIL@example.com` to your actual email address!**\n4. **Subject & Text:** These fields pull the formatted subject and body from the previous 'Format Report Email' node.\n\n**Test this node by running the workflow** to ensure you receive the email.",
"position": "right"
},
"position": [
2640,
300
],
"parameters": {
"text": "={{ $json.emailBody }}",
"options": {},
"subject": "={{ $json.emailSubject }}",
"toEmail": "YOUR_RECIPIENT_EMAIL@example.com",
"fromEmail": "YOUR_GMAIL_EMAIL@gmail.com"
},
"credentials": {
"gmailApi": {
"id": "YOUR_GMAIL_CREDENTIAL_ID",
"resolve": false
}
},
"typeVersion": 2,
"id": "--16"
}
],
"pinData": {},
"version": 1,
"connections": {
"--8": {
"main": [
[
{
"node": "--10",
"type": "main"
}
]
]
},
"--15": {
"main": [
[
{
"node": "--16",
"type": "main"
}
]
]
},
"Reddit--4": {
"main": [
[
{
"node": "Reddit--5",
"type": "main"
}
]
]
},
"--1": {
"main": [
[
{
"node": "Google-RSS--2",
"type": "main"
},
{
"node": "Reddit--4",
"type": "main"
},
{
"node": "YouTube--6",
"type": "main"
}
]
]
},
"Google-RSS--2": {
"main": [
[
{
"node": "--3",
"type": "main"
}
]
]
},
"YouTube--6": {
"main": [
[
{
"node": "YouTube--7",
"type": "main"
}
]
]
},
"--3": {
"main": [
[
{
"node": "--8",
"type": "main"
}
]
]
},
"Reddit--5": {
"main": [
[
{
"node": "--8",
"type": "main"
}
]
]
},
"YouTube--7": {
"main": [
[
{
"node": "--8",
"type": "main"
}
]
]
},
"SQLite--9": {
"main": [
[
{
"node": "--10",
"type": "main"
}
]
]
},
"-9--0": {
"main": [
[
{
"node": "--1",
"type": "main"
}
]
]
},
"AI--13": {
"main": [
[
{
"node": "SQLite--14",
"type": "main"
},
{
"node": "--15",
"type": "main"
}
]
]
},
"AI--12": {
"main": [
[
{
"node": "AI--13",
"type": "main"
}
]
]
},
"SQLite--14": {
"main": [
[]
]
},
"--10": {
"main": [
[
{
"node": "AI--12",
"type": "main"
}
]
],
"output": [
{
"type": "item",
"toIndex": 0,
"fromIndex": 0,
"destination": [
{
"node": "SQLite: 처리 여부 확인",
"input": "input1"
}
]
}
]
}
}
}
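The deduplication in steps 5.1–5.2 and 8 keys each mention on an MD5 hash of its link (falling back to the title) and skips anything already stored in the processed_mentions table. The following rough standalone equivalent uses the better-sqlite3 package purely as a stand-in for n8n's SQLite nodes (an assumption for illustration; the workflow itself reads and writes the company_monitor database those nodes manage).

// Standalone sketch of the dedup logic; better-sqlite3 stands in for n8n's SQLite nodes.
const crypto = require('crypto');
const Database = require('better-sqlite3');

const db = new Database('company_monitor.db');
db.exec('CREATE TABLE IF NOT EXISTS processed_mentions (link_hash TEXT PRIMARY KEY, source TEXT, title TEXT, link TEXT, processed_date TEXT)');

// Returns only mentions whose hash has not been seen before, and records them for future runs.
function filterAndRecordNewMentions(mentions) {
  const seen = db.prepare('SELECT link_hash FROM processed_mentions WHERE link_hash = ?');
  const insert = db.prepare('INSERT INTO processed_mentions (link_hash, source, title, link, processed_date) VALUES (?, ?, ?, ?, ?)');
  const fresh = [];
  for (const m of mentions) {
    const linkHash = crypto.createHash('md5').update(m.link || m.title).digest('hex');
    if (!seen.get(linkHash)) {
      insert.run(linkHash, m.source, m.title, m.link, new Date().toISOString());
      fresh.push({ ...m, linkHash });
    }
  }
  return fresh;
}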
Frequently Asked Questions
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings as needed.
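If you would rather script the import than paste the JSON by hand, something along these lines should work against the n8n Public API, assuming the Public API is enabled on your instance and N8N_URL and N8N_API_KEY are set (required fields can vary between n8n versions, so treat this as a sketch rather than a guaranteed recipe).

// Sketch: import the saved workflow JSON through the n8n Public API (assumes Node.js 18+).
const fs = require('fs');

async function importWorkflow(path) {
  const workflow = JSON.parse(fs.readFileSync(path, 'utf8'));
  const response = await fetch(`${process.env.N8N_URL}/api/v1/workflows`, {
    method: 'POST',
    headers: {
      'X-N8N-API-KEY': process.env.N8N_API_KEY,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      name: 'Company Online Presence Monitor',
      nodes: workflow.nodes,
      connections: workflow.connections,
      settings: {}
    })
  });
  if (!response.ok) throw new Error(`Import failed with status ${response.status}`);
  return response.json();
}

// Hypothetical file name for the copied JSON; adjust to wherever you saved it.
importWorkflow('company-monitor.json').then(w => console.log('Imported workflow id:', w.id));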
What scenarios is this workflow suited for?
Advanced: Market Research, AI Summarization
Is it paid?
This workflow is completely free to import and use. However, third-party services used in the workflow (such as the OpenAI API) may require you to pay for your own usage.
Related Workflows
- •Daily Positive News Summary with OpenAI and Gmail (If, Cron, Gmail): 9 nodes, Piotr Sobolewski, Personal Productivity
- •AI-Powered Product Research and Price Comparison with Google Search and OpenAI (Set, Gmail, OpenAI): 8 nodes, Piotr Sobolewski, Market Research
- •Automated Daily Email Digest with Gmail and GPT Summaries, Sent Every Afternoon (Cron, Gmail, OpenAI): 6 nodes, Piotr Sobolewski, Personal Productivity
- •Automated Multi-Platform Game Deal Tracking with Deku Deals and Gmail Alerts (If, Cron, Gmail): 11 nodes, Piotr Sobolewski, Personal Productivity
- •Automated Web Scraper: Niche Job/Product Monitoring with Telegram Alerts (If, Cron, Function): 6 nodes, Piotr Sobolewski, Market Research
- •Podcast Traction Summaries and Keyword Generation with OpenAI and Gmail (Set, Gmail, OpenAI): 6 nodes, Piotr Sobolewski, Content Creation
Workflow Info
Difficulty: Advanced
Nodes: 17
Categories: 2
Node Types: 10
Author: Piotr Sobolewski (@piotrsobolewski), AI PhD with 7 years of experience as a game dev CEO, currently teaching, helping others, and building something new.
External Links
View on n8n.io →