My Customer Intelligence Engine

Expert

This is an automation workflow in the Market Research and AI Summarization domain with 25 nodes, built mainly from Code, Gmail, Merge, Reddit, SerpApi, and other node types. It performs customer pain-point analysis and delivers an AI briefing using Anthropic, Reddit, X, and SerpAPI.

Prerequisites
  • Google account + Gmail API credentials
  • Google Sheets API credentials
  • Reddit OAuth2, SerpAPI, Anthropic, and twitterapi.io (X/Twitter) credentials
Export workflow
Copy the JSON configuration below and import it into n8n
{
  "id": "PgYCSNco4ZUxp1Qf",
  "meta": {
    "instanceId": "766c8c237f056d009bb1bd53e73633458b146427b1ae9c6ec10561694aa89152",
    "templateCredsSetupCompleted": true
  },
  "name": "My Customer Intelligence Engine",
  "tags": [],
  "nodes": [
    {
      "id": "b8dd0104-33f0-427b-907e-c71cd7366c39",
      "name": "Merge",
      "type": "n8n-nodes-base.merge",
      "position": [
        560,
        -112
      ],
      "parameters": {
        "numberInputs": 3
      },
      "typeVersion": 3.2
    },
    {
      "id": "e4b66790-4ffb-4605-a831-fe0a34c6745d",
      "name": "Filter & Label Reddit",
      "type": "n8n-nodes-base.code",
      "position": [
        176,
        256
      ],
      "parameters": {
        "jsCode": "\nreturn $input.all()\n    .filter(item => item.json.score > 5)\n    .map(item => ({\n        json: {\n            title: item.json.title,\n            text: item.json.selftext,\n            source_url: item.json.url,\n            source: 'Reddit',\n            full_text: `${item.json.title} ${item.json.selftext}`\n        }\n    }));"
      },
      "typeVersion": 2
    },
    {
      "id": "116053e4-cf92-4e90-a817-e54744a511c8",
      "name": "Categorization & Sentiment",
      "type": "n8n-nodes-base.code",
      "position": [
        880,
        -96
      ],
      "parameters": {
        "jsCode": "return $input.all().map(item => {\n    const text = item.json.full_text.toLowerCase();\n    let pain_point = 'Uncategorized';\n    let sentiment = 0; \n\n    if (text.includes('wait') || text.includes('hold') || text.includes('busy signal')) {\n        pain_point = 'Call Hold/Availability';\n        sentiment -= 2;\n    } else if (text.includes('schedule') || text.includes('cancel') || text.includes('missed appointment')) {\n        pain_point = 'Scheduling Inefficiency';\n        sentiment -= 3;\n    } else if (text.includes('rude') || text.includes('snippy') || text.includes('unhelpful')) {\n        pain_point = 'Receptionist Tone/Quality';\n        sentiment -= 3;\n    } else if (text.includes('code') || text.includes('system') || text.includes('ai') || text.includes('automated')) {\n        pain_point = 'Automated System Frustration';\n        sentiment -= 1;\n    }\n\n    item.json.pain_point = pain_point;\n    item.json.sentiment_score = sentiment;\n    item.json.unique_key = pain_point + item.json.text.substring(0, 50).replace(/[^a-zA-Z0-9]/g, '');\n\n    return item;\n});"
      },
      "typeVersion": 2
    },
    {
      "id": "5ac35644-fdd6-42f8-9e6f-86ada4386e80",
      "name": "Deduplicate, Count, and Format",
      "type": "n8n-nodes-base.code",
      "position": [
        1232,
        -96
      ],
      "parameters": {
        "jsCode": "const inputItems = $input.all();\n\nif (!Array.isArray(inputItems) || inputItems.length === 0) {\n    return []; \n}\n\n\nconst uniqueItems = new Map();\nconst painPointCounts = {};\nconst sourceCounts = {}; \nlet totalSentiment = 0; \n\nfor (const item of inputItems) {\n    if (!item.json || !item.json.unique_key) continue; \n    \n    if (!uniqueItems.has(item.json.unique_key)) {\n        uniqueItems.set(item.json.unique_key, item);\n        totalSentiment += item.json.sentiment_score;\n    }\n    \n    painPointCounts[item.json.pain_point] = (painPointCounts[item.json.pain_point] || 0) + 1;\n    sourceCounts[item.json.source] = (sourceCounts[item.json.source] || 0) + 1; \n}\n\nconst totalUniqueItems = uniqueItems.size;\nconst averageSentiment = totalUniqueItems > 0 ? totalSentiment / totalUniqueItems : 0;\nconst roundedAvgSentiment = Math.round(averageSentiment * 100) / 100;\n\n\nconst summaryString = `\nCurrent Market Intelligence Brief (Total Unique Complaints: ${totalUniqueItems}):\nAverage Sentiment Intensity (Lower is Worse): ${roundedAvgSentiment}\nPain Point Frequencies: ${JSON.stringify(painPointCounts, null, 2)}\nSource Distribution: ${JSON.stringify(sourceCounts, null, 2)}\nTop 5 Unique Complaint Examples: \n${Array.from(uniqueItems.values()).slice(0, 5).map(item => \n    `- [${item.json.pain_point}] \"${item.json.title?.substring(0, 60)}...\" (Source: ${item.json.source})`\n).join('\\n')}\n`;\n\n\nconst outputForSheets = [];\nconst painPointKeys = Object.keys(painPointCounts);\n\nif (painPointKeys.length > 0) {\n    for (const key of painPointKeys) {\n        const latestSourceItem = Array.from(uniqueItems.values()).find(item => item.json.pain_point === key);\n        \n        const sheetObject = {\n            'Pain_Point': key,\n            'Count': painPointCounts[key],\n            'Average_Sentiment': roundedAvgSentiment, \n            'Latest_Source': latestSourceItem?.json.source || 'N/A', \n            'LLM_SUMMARY_HOLDER': summaryString \n        };\n        \n        outputForSheets.push({ json: sheetObject });\n    }\n}\n\n\nreturn outputForSheets;"
      },
      "typeVersion": 2
    },
    {
      "id": "86de386e-02a5-401c-862b-f034ac530a7e",
      "name": "Filter & Label X",
      "type": "n8n-nodes-base.code",
      "position": [
        176,
        800
      ],
      "parameters": {
        "jsCode": "const rawTweets = $input.first().json.data || $input.first().json.tweets;\n\nif (!rawTweets || !Array.isArray(rawTweets)) {\n    return [];\n}\n\nreturn rawTweets\n    .filter(tweet => (tweet.favorite_count || 0) > 5)\n    .map(tweet => ({\n        json: {\n            title: tweet.user ? 'Tweet from @' + tweet.user.screen_name : 'Tweet',\n            text: tweet.text || '',\n            source_url: tweet.url || `https://twitter.com/i/web/status/${tweet.id_str}`,\n            source: 'Twitter (External API)',\n            full_text: tweet.text || '',\n            favorite_count: tweet.favorite_count || 0,\n        }\n    }));"
      },
      "typeVersion": 2
    },
    {
      "id": "89ee7c6d-9af1-4406-9e55-1d555be8c69b",
      "name": "Get LLM Summary",
      "type": "n8n-nodes-base.code",
      "position": [
        1760,
        32
      ],
      "parameters": {
        "jsCode": "const previousNodeName = 'Deduplicate, Count, and Format'; \nconst previousNodeOutput = $items(previousNodeName, 0);\n\nif (!previousNodeOutput || previousNodeOutput.length === 0 || !previousNodeOutput[0].json) {\n    return [{ json: { summary: \"Error: Could not retrieve summary string from upstream node.\" } }];\n}\n\nconst summary = previousNodeOutput[0].json.LLM_SUMMARY_HOLDER || \"No market intelligence data found for this run.\"; \n\nif (summary.includes(\"No market intelligence data found\")) {\n    const cleanSummary = \"The system ran, but no relevant market complaints were found in the data sources. No brief is required.\";\n    return [{ json: { summary: cleanSummary } }];\n}\n\nreturn [{ json: { summary: summary } }];"
      },
      "typeVersion": 2
    },
    {
      "id": "871834d2-510f-4b1c-82bb-3159e123ea68",
      "name": "Search Reddit",
      "type": "n8n-nodes-base.reddit",
      "position": [
        -48,
        256
      ],
      "parameters": {
        "limit": 50,
        "filters": {},
        "operation": "getAll",
        "subreddit": "={{ $json.subreddit }}"
      },
      "credentials": {
        "redditOAuth2Api": {
          "id": "LQf5gOlHg5jrdJ68",
          "name": "Reddit account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "06e6eebe-a394-4135-99f3-d1d4da03d62a",
      "name": "Search Google",
      "type": "n8n-nodes-serpapi.serpApi",
      "position": [
        -272,
        -368
      ],
      "parameters": {
        "q": "HVAC company customer service problems",
        "location": "={{ $json[\"Location (where you want the search to originate):\"] }}",
        "requestOptions": {},
        "additionalFields": {
          "hl": "en",
          "num": "20"
        }
      },
      "credentials": {
        "serpApi": {
          "id": "DIWlQVDEFO4o8beb",
          "name": "SerpAPI account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "957b3e0e-4163-4bc4-a72b-538cc03c4158",
      "name": "Search X",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        -272,
        800
      ],
      "parameters": {
        "url": "https://api.twitterapi.io/twitter/tweet/advanced_search",
        "options": {},
        "sendQuery": true,
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth",
        "queryParameters": {
          "parameters": [
            {
              "name": "query",
              "value": "={{ $json[\"Mention X (Twitter) Search Query:\"] }}"
            },
            {
              "name": "queryType",
              "value": "Latest"
            }
          ]
        }
      },
      "credentials": {
        "httpHeaderAuth": {
          "id": "Q7mjBI2AxTsUO2Xj",
          "name": "X/Twitter Demo"
        }
      },
      "typeVersion": 4.2
    },
    {
      "id": "03925ec9-70a1-4a8e-94f9-1cd358d119a4",
      "name": "Filter & Label Google",
      "type": "n8n-nodes-base.code",
      "position": [
        176,
        -368
      ],
      "parameters": {
        "jsCode": "return $input.all()\n    .flatMap(item => item.json.organic_results || []) \n    .map(result => ({\n        json: {\n            title: result.title,\n            text: result.snippet,\n            source_url: result.link,\n            source: 'SERP API (Web)',\n            full_text: `${result.title} ${result.snippet}`\n        }\n    }));"
      },
      "typeVersion": 2
    },
    {
      "id": "06b06b67-849a-4881-ba06-0ea2428789dd",
      "name": "Executive Email",
      "type": "@n8n/n8n-nodes-langchain.anthropic",
      "position": [
        1984,
        32
      ],
      "parameters": {
        "modelId": {
          "__rl": true,
          "mode": "list",
          "value": "claude-haiku-4-5-20251001",
          "cachedResultName": "claude-haiku-4-5-20251001"
        },
        "options": {
          "system": "You are a strategic AI product consultant tasked with generating a professional executive summary for a sales team. \n\n***CRITICAL CONSTRAINT: Do NOT use markdown code blocks (e.g., ```html or ```) or any external delimiters. Output ONLY the raw HTML content. Do NOT include <html>, <head>, or <body> tags.***\n\nYour response must strictly adhere to the following structure and formatting rules:\n\n<h1>AI Voice Agent Executive Summary – HVAC Customer Service Market</h1>\n\n<h2>OPPORTUNITY STATEMENT</h2>\n(Provide a two-sentence summary of the biggest market opportunity for the AI Voice Agent based on the data. Use <p> tags for paragraphs.)\n\n<hr>\n\n<h2>TOP 3 PAIN/SELLING POINTS</h2>\n(List the three highest-count pain points. For each point, the pain point must be formatted using a large **<h3> tag** and bold text: <b>1. [Pain Point Name]</b>. Provide a <b>corresponding AI Voice Agent Feature Suggestion</b> that solves that specific pain. Use HTML unordered lists <ul> for clean formatting of feature suggestions.)\n\n<hr>\n\n<h2>SOURCE TRUST ASSESSMENT</h2>\n(Provide a complete analysis of the data's reliability, ensuring the table is fully rendered and includes ALL THREE SOURCES (Reddit, Twitter, SERP API). \n\n***CRITICAL STYLING FIX: The table MUST be wrapped in a <div align=\"center\"> container. The table MUST have width=\"80%\" set as an attribute and the border=\"1\" attribute.***\n\nUse a single HTML table for the analysis. For a clean, single-line border, the table MUST include the following inline style: **style=\"border-collapse: collapse; border: 1px solid black;\"**\n\nThe table header cells (<th>) MUST have the following combined inline style: **style=\"background-color: #f2f2f2; padding: 10px; text-align: center;\"** (This sets the background to light gray, adds padding, and center-aligns the text).\n\nThe table data cells (<td>) MUST have the following inline style: **style=\"padding: 10px;\"** (This adds padding to the regular data cells).\n\nThe table MUST include the following columns:\n1. <b>Data Source</b>\n2. <b>Volume</b> (Must include both **number of mentions** AND **percentage of total**—e.g., 200 mentions (87%))\n3. <b>Reliability Rating</b> (Use text and star emojis)\n4. <b>Context & Limitations</b>\n\nFollow the table with a final key insight summary paragraph.)"
        },
        "messages": {
          "values": [
            {
              "content": "=DATA TO ANALYZE:\n{{ $json.summary }}"
            }
          ]
        }
      },
      "credentials": {
        "anthropicApi": {
          "id": "CFRCoeDLcmUNPQT7",
          "name": "Anthropic account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "d2e6ff58-1af5-486c-aa84-52ab8e4f9824",
      "name": "Send Email",
      "type": "n8n-nodes-base.gmail",
      "position": [
        2320,
        32
      ],
      "webhookId": "3ac6cf9d-8d4d-4055-86e8-9c82a7d13d41",
      "parameters": {
        "sendTo": "={{ $('Form').item.json[\"What is your Email?\"] }}",
        "message": "={{ $json.content[0].text }}",
        "options": {
          "appendAttribution": false
        },
        "subject": "=re: Market briefing for {{ $('Form').item.json[\"What is your Name?\"] }}"
      },
      "credentials": {
        "gmailOAuth2": {
          "id": "PJrUDp0UTEVBDOMT",
          "name": "Gmail account"
        }
      },
      "typeVersion": 2.1
    },
    {
      "id": "6d8947d0-ca54-4036-bb15-77152ed4bf8b",
      "name": "Log Search Details",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        1760,
        -256
      ],
      "parameters": {
        "columns": {
          "value": {
            "Count": "={{ $json.Count }}",
            "Pain_Point": "={{ $json.Pain_Point }}",
            "Latest_Source": "={{ $json.Latest_Source }}",
            "Execution_Date": "={{ $now.format('dd/LL/yyyy') }}",
            "Average_Sentiment": "={{ $json.Average_Sentiment }}",
            "Summary_Sample_Example": "={{ $json.LLM_SUMMARY_HOLDER }}"
          },
          "schema": [
            {
              "id": "Execution_Date",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Execution_Date",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Latest_Source",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Latest_Source",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Pain_Point",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Pain_Point",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Count",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Count",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Average_Sentiment",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Average_Sentiment",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Summary_Sample_Example",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Summary_Sample_Example",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            }
          ],
          "mappingMode": "defineBelow",
          "matchingColumns": [],
          "attemptToConvertTypes": false,
          "convertFieldsToString": false
        },
        "options": {},
        "operation": "append",
        "sheetName": {
          "__rl": true,
          "mode": "list",
          "value": "gid=0",
          "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1q1Qw8LqRjn-RsDNkbrUJVw7UWfuqR-VInWYSikvYYnY/edit#gid=0",
          "cachedResultName": "Sheet1"
        },
        "documentId": {
          "__rl": true,
          "mode": "list",
          "value": "1q1Qw8LqRjn-RsDNkbrUJVw7UWfuqR-VInWYSikvYYnY",
          "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1q1Qw8LqRjn-RsDNkbrUJVw7UWfuqR-VInWYSikvYYnY/edit?usp=drivesdk",
          "cachedResultName": "Customer Intent"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "jBABhxE6xPp1vBwL",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.7
    },
    {
      "id": "13cbdc50-d991-467c-bcb8-da40f150af79",
      "name": "Form",
      "type": "n8n-nodes-base.formTrigger",
      "position": [
        -720,
        -80
      ],
      "webhookId": "997fbb25-7733-4286-a5db-14fbce3ea11f",
      "parameters": {
        "options": {},
        "formTitle": "Customer Intelligence Briefing",
        "formFields": {
          "values": [
            {
              "fieldLabel": "What is your Name?",
              "requiredField": true
            },
            {
              "fieldType": "email",
              "fieldLabel": "What is your Email?",
              "requiredField": true
            },
            {
              "fieldLabel": "Location (where you want the search to originate):",
              "placeholder": "United States",
              "requiredField": true
            },
            {
              "fieldLabel": "Mention SerpAPI (Google) Search Query:",
              "placeholder": "Refer to STICKY NOTE",
              "requiredField": true
            },
            {
              "fieldLabel": "Mention X (Twitter) Search Query:",
              "placeholder": "Refer to STICKY NOTE",
              "requiredField": true
            },
            {
              "fieldLabel": "Mention the #1 subreddit to search:",
              "placeholder": "Refer to STICKY NOTE",
              "requiredField": true
            },
            {
              "fieldLabel": "Mention the #2 subreddit to search:",
              "placeholder": "Refer to STICKY NOTE",
              "requiredField": true
            }
          ]
        },
        "formDescription": "The system will analyze the web, categorize complaints, and deliver a strategic executive summary to the sales team within minutes."
      },
      "typeVersion": 2.3
    },
    {
      "id": "4b2111ee-09bd-45f7-b8b7-81b37403f4c1",
      "name": "Sticky Note",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -1168,
        -128
      ],
      "parameters": {
        "width": 624,
        "height": 736,
        "content": "## Trigger: Form\n\n\n\n\n\n## 🔗 Search Query Guide - Form\n\n\nUse this guide to construct highly targeted queries for each source, ensuring maximum signal clarity.\n\n### 1\\. SERP API (Google) Query\n  \n*   **Example Query:** \"HVAC company customer service problems\"\n    \n*   **Pro Tip:** Use quotes to search for exact phrases and add modifiers like \"solution\" or \"inefficiency\".\n    \n*   **Referenced Link:** [https://serpapi.com/search-api](https://serpapi.com/search-api)\n    \n\n### 2\\. X (Twitter) Search Query\n  \n*   **Example Query:** HVAC (\"customer service\" OR call OR scheduling) (bad OR terrible OR wait OR \"on hold\")\n    \n*   **Pro Tip:** Use parentheses for Boolean logic and avoid overly broad terms to prevent noise. This focuses on high-impact customer interactions.\n    \n*   **Referenced Link:** [https://github.com/igorbrigadir/twitter-advanced-search](https://github.com/igorbrigadir/twitter-advanced-search)\n    \n\n### 3\\. Subreddit Names (Reddit)\n*   **Example Input:** hvac, homeowners, mildlyinfuriating\n    \n*   **Important Note:** Make sure each subreddit exists and is spelled correctly."
      },
      "typeVersion": 1
    },
    {
      "id": "19ea5f7a-a156-4350-b243-a0b308d3e7e1",
      "name": "Sticky Note 1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -352,
        -672
      ],
      "parameters": {
        "color": 2,
        "width": 704,
        "height": 480,
        "content": "Data Stream Ingestion: SERP API\n----------------------------------------------------------\n\n1.  **SERP API Node:** \nThis node uses the form's **Target Keywords** to query Google for industry reports, articles, and authoritative commentary. This establishes the **Strategic Context Stream** to validate the systemic nature of the customer pain points.\n\n    \n2.  **Function Node (Filter & Label SERP):** \nThis code performs essential data cleanup. It extracts only the title, snippet (text), and link from the API output. **Crucially,** it applies the mandatory **source: 'SERP API (Web)'** label to normalize the data, preparing it for the Merge Node (Step 4)."
      },
      "typeVersion": 1
    },
    {
      "id": "97c6df85-6b40-4b14-afaa-9bc1a31daff9",
      "name": "Sticky Note 2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -352,
        -128
      ],
      "parameters": {
        "color": 2,
        "width": 704,
        "height": 560,
        "content": "Data Stream Ingestion: Reddit\n------------------------------------------------------------\n\n1.  **Function Node (Create Subreddit List):** \nThis node takes the user's comma-separated list of subreddits from the form input and transforms the list into individual items. This action initiates the necessary looping structure, allowing the subsequent Reddit node to run once per subreddit name.\n\n    \n2.  **Reddit Node (Get Posts):** \nThis node executes iteratively, running once for every individual subreddit item received from the previous node. It dynamically fetches a batch of posts using the expression  {{ $json.subreddit }}.\n\n    \n3.  **Function Node (Filter & Label Reddit):** \nThis code performs two essential tasks: **filtering** (discarding low-engagement posts, score <= 5) and **labeling** (applying the **source: 'Reddit'** label) before standardizing the post structure for the Merge Node."
      },
      "typeVersion": 1
    },
    {
      "id": "cbe7c270-a0eb-4107-ac7d-1b0c5a1423c8",
      "name": "Create Subreddit List",
      "type": "n8n-nodes-base.code",
      "position": [
        -272,
        256
      ],
      "parameters": {
        "jsCode": "return [\n    { json: { subreddit: $input.first().json[\"Mention the #1 subreddit to search:\"] } },\n    { json: { subreddit: $input.first().json[\"Mention the #2 subreddit to search:\"] } },\n];"
      },
      "typeVersion": 2
    },
    {
      "id": "3fc140dd-d940-46ba-a93c-07f619c5b517",
      "name": "Sticky Note 3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -352,
        496
      ],
      "parameters": {
        "color": 2,
        "width": 704,
        "height": 480,
        "content": "Data Stream Ingestion: X\n------------------------------------------------------------\n\n1.  **HTTP Request Node (Fetch Tweets):** \nThis node uses an external API (twitterapi.io) via an HTTP call to query X/Twitter with the **Target Keywords**. This configuration is essential to **bypass the restrictive rate limits (429)** and advanced query errors (400) of the standard API, ensuring reliable ingestion.\n\n    \n2.  **Function Node (Filter & Label X):** \nThis code acts as the final quality control. It **filters out low-engagement or noisy items** (like tweets with <= 5 likes) that the external API didn't handle. It then extracts, standardizes, and applies the **source: 'Twitter'** label, ensuring a clean, high-signal social stream for the Merge Node.\n    \n\n\n\n    \n\n\n"
      },
      "typeVersion": 1
    },
    {
      "id": "fb6c3b6d-5c68-4760-8741-ad604f1bbaea",
      "name": "Sticky Note 4",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        800,
        -560
      ],
      "parameters": {
        "color": 2,
        "width": 624,
        "height": 656,
        "content": "Core Analytical Engines\n----------------------------------------------------------\n    \n\n1.  **Categorization & Sentiment:** \nThis node applies the **Hybrid Categorization Logic** to the entire, merged dataset. It uses sophisticated, custom keyword scoring (instead of expensive LLM calls) to assess the intent of each complaint and assign a precise **pain\\_point category** (e.g., 'Automated System Frustration') and a **sentiment\\_score**. This creates the clean labels necessary for final aggregation.\n\n    \n1.  **Deduplicate, Count, and Format:** \nThis node acts as the final statistical processor and dispatcher. It performs:\n    \n    *   **Deduplication:** It discards redundant complaints using the unique keys generated in the previous step.\n        \n    *   **Aggregation:** It calculates the total **Count** and **Average\\_Sentiment** for each unique Pain\\_Point.\n        \n    *   **Dispatch Preparation:** It formats the statistical data into a structure suitable for Google Sheets and simultaneously calculates and stores the comprehensive **summaryString** required for the downstream LLM/Email delivery via the serial route."
      },
      "typeVersion": 1
    },
    {
      "id": "1f4ea8d1-93de-42be-ad0b-eb3b84e91622",
      "name": "Sticky Note 5",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1680,
        -16
      ],
      "parameters": {
        "width": 848,
        "height": 688,
        "content": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFinal Delivery System: LLM Briefing & Dispatch\n----------------------------------------------------------\n\n1.  **Function (Get LLM Summary):** \nThis node is the **data retrieval bridge** for the serial route. It uses the helper function **$items()** to reach back into the workflow's memory and retrieve the complete **summaryString** (containing all metrics and examples) from the preceding node. It includes essential **safeguards** to pass a clean \"No Data Found\" message if necessary.\n\n    \n2.  **Message a model (Anthropic):** \nThis is the **Intelligence Generator**. It takes the summary string from the previous node and, using a strict **System Prompt**, transforms the raw data into a professional, formatted **HTML executive brief**. It applies strategic analysis, generates the Opportunity Statement, Top 3 Selling Points, and Source Trust Assessment.\n\n3.  **Gmail (Send a message):** \nThis node is the **Final Dispatcher**. It retrieves the pure HTML content generated by the LLM and sends it to the target sales team recipients. The email is formatted as **HTML** to ensure the specialized tables, headers, and bullet points generated by the LLM render correctly and professionally."
      },
      "typeVersion": 1
    },
    {
      "id": "c9f47226-c11e-499c-9246-64960bb93a2a",
      "name": "Sticky Note 6",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1680,
        -560
      ],
      "parameters": {
        "width": 848,
        "height": 480,
        "content": "\nLog Search Detail Node\n----------------------------------------------------------\nThis node captures the **raw input parameters** submitted by the user in the form trigger (e.g., Target Keywords, Target Subreddits) and stores them in the workflow's memory or a dedicated log.\n\n\n### **Function:** \nIt creates an **audit trail**, allowing you to:\n\n*   **Reproduce the Search:** Easily check the exact conditions used for any given intelligence brief.\n        \n*   **Debugging:** Quickly identify if a flawed search query caused a downstream ingestion error.\n\n\n"
      },
      "typeVersion": 1
    },
    {
      "id": "80550bfb-1b83-4c0f-8b40-ae1dd688b303",
      "name": "Sticky Note 12",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -2016,
        -560
      ],
      "parameters": {
        "color": 5,
        "width": 624,
        "height": 368,
        "content": "# Hello there!\n\n\n\n# Need support or have a workflow question?  \n\n## 🔗 [Connect on LinkedIn](https://www.linkedin.com/in/bhuvaneshhhh/)"
      },
      "typeVersion": 1
    },
    {
      "id": "6b8bd8d3-b3b4-47d9-bb3b-60c7beee3321",
      "name": "Sticky Note 7",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -1280,
        -768
      ],
      "parameters": {
        "color": 7,
        "width": 3920,
        "height": 1856,
        "content": ""
      },
      "typeVersion": 1
    },
    {
      "id": "ba1380c4-3024-4e7d-97ad-2326a07814d0",
      "name": "Sticky Note 8",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -2016,
        -128
      ],
      "parameters": {
        "width": 624,
        "height": 736,
        "content": "# Customer Intelligence Engine\n\n## How it Works?\n\n## 1. Data Ingestion and Fusion\nThe workflow initiates upon a form submission and immediately splits into three parallel branches to fetch data simultaneously, ensuring speed: the SERP API pulls authoritative web commentary, Reddit runs a structured loop to gather community complaints, and an HTTP Request accesses the X/Twitter API for real-time sentiment. This data is then cleaned, labeled with its source, and combined into a single unified stream by the Merge Node (Mode: Append).\n\n## 2. Core Analysis and Aggregation\nThe unified data stream is passed to the Hybrid Categorization Function Node, which applies custom JavaScript logic (Layer 1: High-Confidence Matching; Layer 2: Semantic Similarity Scoring) to classify each complaint accurately into specific Pain Points (e.g., 'Automated System Frustration') without costly per-item LLM calls. The next Deduplicate, Count, and Format Node performs the final aggregation, counting the frequency of each pain point and storing the entire analysis string within the data payload for later retrieval.\n\n## 3. Final Dispatch (Serial Route)\nThe final delivery uses an alternate serial route to ensure stability: the aggregated statistical data is first logged directly to Google Sheets. Simultaneously, a subsequent Function Node (Get LLM Summary) retrieves the stored analysis string from memory. This string is then fed to the Message a model (Anthropic) Node, which acts as the Intelligence Generator, transforming the raw metrics into a professional, formatted HTML Executive Brief that is finally dispatched via the Gmail Node."
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "bc1a0b84-4826-47a8-a194-343832b3dfb5",
  "connections": {
    "13cbdc50-d991-467c-bcb8-da40f150af79": {
      "main": [
        [
          {
            "node": "06e6eebe-a394-4135-99f3-d1d4da03d62a",
            "type": "main",
            "index": 0
          },
          {
            "node": "cbe7c270-a0eb-4107-ac7d-1b0c5a1423c8",
            "type": "main",
            "index": 0
          },
          {
            "node": "957b3e0e-4163-4bc4-a72b-538cc03c4158",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "b8dd0104-33f0-427b-907e-c71cd7366c39": {
      "main": [
        [
          {
            "node": "116053e4-cf92-4e90-a817-e54744a511c8",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "957b3e0e-4163-4bc4-a72b-538cc03c4158": {
      "main": [
        [
          {
            "node": "86de386e-02a5-401c-862b-f034ac530a7e",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "89ee7c6d-9af1-4406-9e55-1d555be8c69b": {
      "main": [
        [
          {
            "node": "06b06b67-849a-4881-ba06-0ea2428789dd",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "cbe7c270-a0eb-4107-ac7d-1b0c5a1423c8": {
      "main": [
        [
          {
            "node": "871834d2-510f-4b1c-82bb-3159e123ea68",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "06e6eebe-a394-4135-99f3-d1d4da03d62a": {
      "main": [
        [
          {
            "node": "03925ec9-70a1-4a8e-94f9-1cd358d119a4",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "871834d2-510f-4b1c-82bb-3159e123ea68": {
      "main": [
        [
          {
            "node": "e4b66790-4ffb-4605-a831-fe0a34c6745d",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "06b06b67-849a-4881-ba06-0ea2428789dd": {
      "main": [
        [
          {
            "node": "d2e6ff58-1af5-486c-aa84-52ab8e4f9824",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "86de386e-02a5-401c-862b-f034ac530a7e": {
      "main": [
        [
          {
            "node": "b8dd0104-33f0-427b-907e-c71cd7366c39",
            "type": "main",
            "index": 2
          }
        ]
      ]
    },
    "6d8947d0-ca54-4036-bb15-77152ed4bf8b": {
      "main": [
        []
      ]
    },
    "03925ec9-70a1-4a8e-94f9-1cd358d119a4": {
      "main": [
        [
          {
            "node": "b8dd0104-33f0-427b-907e-c71cd7366c39",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "e4b66790-4ffb-4605-a831-fe0a34c6745d": {
      "main": [
        [
          {
            "node": "b8dd0104-33f0-427b-907e-c71cd7366c39",
            "type": "main",
            "index": 1
          }
        ]
      ]
    },
    "116053e4-cf92-4e90-a817-e54744a511c8": {
      "main": [
        [
          {
            "node": "5ac35644-fdd6-42f8-9e6f-86ada4386e80",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "5ac35644-fdd6-42f8-9e6f-86ada4386e80": {
      "main": [
        [
          {
            "node": "89ee7c6d-9af1-4406-9e55-1d555be8c69b",
            "type": "main",
            "index": 0
          },
          {
            "node": "6d8947d0-ca54-4036-bb15-77152ed4bf8b",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
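The categorization Code node in the JSON above assigns each complaint a pain-point label and a sentiment score with plain keyword matching, avoiding per-item LLM calls. A minimal standalone sketch of that logic (keyword lists abbreviated here; sample complaints invented):

```javascript
// Keyword-based categorization, mirroring the workflow's "Categorization &
// Sentiment" Code node. Keyword lists are abbreviated; inputs are invented.
function categorize(fullText) {
  const text = fullText.toLowerCase();
  let pain_point = 'Uncategorized';
  let sentiment = 0; // 0 = neutral; more negative = more severe

  if (text.includes('wait') || text.includes('hold') || text.includes('busy signal')) {
    pain_point = 'Call Hold/Availability';
    sentiment -= 2;
  } else if (text.includes('schedule') || text.includes('cancel') || text.includes('missed appointment')) {
    pain_point = 'Scheduling Inefficiency';
    sentiment -= 3;
  } else if (text.includes('rude') || text.includes('unhelpful')) {
    pain_point = 'Receptionist Tone/Quality';
    sentiment -= 3;
  } else if (text.includes('system') || text.includes('automated')) {
    pain_point = 'Automated System Frustration';
    sentiment -= 1;
  }
  return { pain_point, sentiment };
}

console.log(categorize('Been on hold for 40 minutes')); // { pain_point: 'Call Hold/Availability', sentiment: -2 }
console.log(categorize('They cancelled my appointment twice'));
```

Because the branches are exclusive and ordered, each complaint receives exactly one label; very broad keywords (the workflow also matches 'ai' and 'code') should come last so they do not shadow more specific categories.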
Frequently Asked Questions

How do I use this workflow?

Copy the JSON above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
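Before importing, it can help to confirm that the copied JSON parses and that node-name references inside Code node expressions (such as `$('Form')` or `$items('Deduplicate, Count, and Format', 0)`) still point at existing nodes; such references break silently when node names are renamed or machine-translated. A minimal Node.js sketch (the `findBrokenRefs` helper and the `demo` object are invented for illustration):

```javascript
// Sanity-check an exported n8n workflow object: flag $('Name') / $items('Name')
// references in Code nodes that point at nodes which do not exist.
function findBrokenRefs(wf) {
  const names = new Set(wf.nodes.map(n => n.name));
  const broken = [];
  for (const node of wf.nodes) {
    const code = (node.parameters && node.parameters.jsCode) || '';
    // Match $('Name') and $items('Name', ...) with a string-literal node name.
    for (const m of code.matchAll(/\$(?:items)?\(\s*'([^']+)'/g)) {
      if (!names.has(m[1])) broken.push({ node: node.name, missing: m[1] });
    }
  }
  return broken;
}

// Invented toy workflow: one Code node referencing a node that was renamed away.
const demo = {
  nodes: [
    { name: 'Form', parameters: {} },
    { name: 'Get LLM Summary', parameters: { jsCode: "return $items('Deduplicate, Count, and Format', 0);" } },
  ],
};
console.log(findBrokenRefs(demo)); // reports the one dangling reference
```

References passed as variables rather than string literals are not caught by this sketch, so treat it as a quick smoke test rather than a full validation.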

Which scenarios is this workflow suited for?

Expert: Market Research, AI Summarization

Is it paid?

The workflow itself is completely free. Note, however, that third-party services used in the workflow (such as the Anthropic API and SerpAPI) may incur charges.

Workflow Information
Difficulty level
Expert
Number of nodes: 25
Categories: 2
Node types: 10
Difficulty description

For advanced users; complex workflows with 16+ nodes

Author
Bhuvanesh R

@bhuvanesh

I partner with businesses to design AI voice agents and automation systems. Let's connect on LinkedIn to discuss your AI integration and automation needs.

External links
View on n8n.io
