Company Online Image Monitoring

Expert

This is an automation workflow with 17 nodes in the Market Research and AI Summarization categories. It mainly uses Set, Cron, Gmail, OpenAI, and Reddit nodes, among others. It monitors a company's online image daily across Google News, Reddit, and YouTube, with AI sentiment analysis and multi-platform tracking.

Prerequisites
  • Google account + Gmail API credentials
  • Google OAuth2 credentials with the YouTube Data API v3 enabled (for the YouTube search node)
  • Reddit OAuth2 API credentials (for the Reddit search node)
  • OpenAI API key
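
The "Set Company Details" node defines companyName and searchKeywords, and the three fetch nodes derive their Google News, Reddit, and YouTube queries from those values. For reference, here is a minimal standalone sketch of that query construction (plain Node.js; inside n8n, the expressions in the JSON below do the equivalent):

// Sketch: how the monitoring queries are built from the Set node values.
// Mirrors the expressions in the "Fetch Google News RSS",
// "Search Reddit Posts" and "Search YouTube Videos" nodes.
const companyName = 'YourCompanyNameHere';
const searchKeywords = ['YourCompanyNameHere', 'YourCompanyProduct', '#YourCompanyTag'];

// Google News RSS search URL (hl/gl/ceid control language and region)
const newsUrl =
  'https://news.google.com/rss/search?q=' + encodeURIComponent(companyName) +
  '&hl=en-US&gl=US&ceid=US:en';

// Reddit post search: keywords joined with OR
const redditQuery = searchKeywords.join(' OR ');

// YouTube video search: keywords joined with spaces
const youtubeQuery = searchKeywords.join(' ');

console.log({ newsUrl, redditQuery, youtubeQuery });
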
Export Workflow
Copy the following JSON configuration and import it into n8n
{
  "nodes": [
    {
      "name": "Täglicher Morgen-Trigger (9 Uhr)",
      "type": "n8n-nodes-base.cron",
      "notes": {
        "text": "### 1. Daily Morning Trigger (9 AM)\n\nThis `Cron` node triggers the workflow automatically every **day at 9:00 AM** (based on your n8n server's local time zone). This provides a regular daily digest of activity.\n\n**To change the schedule:** Adjust the 'Hour' and 'Minute' fields to your preferred time for receiving the report.",
        "position": "right"
      },
      "position": [
        240,
        300
      ],
      "parameters": {
        "mode": "everyDay",
        "value": {
          "hour": [
            9
          ],
          "minute": [
            0
          ]
        },
        "options": {}
      },
      "typeVersion": 1,
      "id": "T-glicher-Morgen-Trigger-9-Uhr--0"
    },
    {
      "name": "Unternehmensdetails festlegen",
      "type": "n8n-nodes-base.set",
      "notes": {
        "text": "### 2. Set Company Details\n\nThis `Set` node defines the company you want to monitor and the keywords to search for.\n\n**Setup (CRITICAL!):**\n1.  **`companyName`:** Change `YourCompanyNameHere` to the exact name of the company you are monitoring (e.g., 'Google', 'Coca-Cola'). This is used for the email subject and AI prompts.\n2.  **`searchKeywords`:** List all relevant keywords, including common misspellings, product names, hashtags, or related terms. These will be used for searching news, Reddit, and YouTube. Example: `[\"Acme Corp\", \"AcmeCo\", \"#AcmeCorp\", \"AcmeSoftware\"]`",
        "position": "right"
      },
      "position": [
        480,
        300
      ],
      "parameters": {
        "values": [
          {
            "name": "companyName",
            "value": "YourCompanyNameHere"
          },
          {
            "name": "searchKeywords",
            "value": [
              "YourCompanyNameHere",
              "YourCompanyProduct",
              "#YourCompanyTag"
            ]
          }
        ],
        "options": {}
      },
      "typeVersion": 2,
      "id": "Unternehmensdetails-festlegen-1"
    },
    {
      "name": "Fetch Google News RSS",
      "type": "n8n-nodes-base.rssFeed",
      "notes": {
        "text": "### 3.1. Fetch Google News RSS\n\nThis `RSS Feed` node fetches news articles from Google News that mention your `companyName`.\n\n**Setup:**\n1.  **URL:** It's pre-configured to search Google News using your `companyName`. You can adjust the `hl` (host language) and `gl` (geo location) parameters if needed.\n\n**Note:** Google News RSS is a good general news source, but it might not capture every single article across all sites.",
        "position": "right"
      },
      "position": [
        720,
        100
      ],
      "parameters": {
        "url": "=https://news.google.com/rss/search?q={{ encodeURIComponent($node[\"Set Company Details\"].json.companyName) }}&hl=en-US&gl=US&ceid=US:en",
        "options": {}
      },
      "typeVersion": 1,
      "id": "Fetch-Google-News-RSS-2"
    },
    {
      "name": "News für Merge vorbereiten",
      "type": "n8n-nodes-base.function",
      "notes": {
        "text": "### 3.2. Prepare News for Merge\n\nThis `Function` node standardizes the news articles' data into a consistent format (`source`, `title`, `text`, `link`, `publishedAt`) for later merging and AI processing.\n\n**No configuration needed.**",
        "position": "right"
      },
      "position": [
        960,
        100
      ],
      "parameters": {
        "options": {},
        "function": "const preparedItems = [];\n\nfor (const item of items) {\n  preparedItems.push({\n    json: {\n      source: 'News Article',\n      title: item.json.title || 'N/A',\n      text: item.json.contentSnippet || item.json.description || 'N/A',\n      link: item.json.link || '#',\n      publishedAt: item.json.isoDate || new Date().toISOString()\n    }\n  });\n}\n\nreturn preparedItems;"
      },
      "typeVersion": 1,
      "id": "News-f-r-Merge-vorbereiten-3"
    },
    {
      "name": "Search Reddit Posts",
      "type": "n8n-nodes-base.reddit",
      "notes": {
        "text": "### 3.3. Search Reddit Posts\n\nThis `Reddit` node searches for posts containing your `searchKeywords`.\n\n**Setup:**\n1.  **Reddit Credential:** Click 'Credentials' and select 'New Credential'. Choose 'Reddit OAuth2 API'. Follow n8n documentation for detailed steps to create a Reddit app (type 'script') and get your client ID and secret. You'll also need your Reddit username/password for authentication.\n2.  **Query:** It's pre-configured to search using your `searchKeywords` joined by ' OR '.\n3.  **Limit & Sort:** Adjust the number of results (`limit`) and `sort` order as needed.\n\n**Note:** Reddit API has rate limits, so be mindful of how often you run this if you increase the limit significantly.",
        "position": "right"
      },
      "position": [
        720,
        300
      ],
      "parameters": {
        "query": "={{ $node[\"Set Company Details\"].json.searchKeywords.join(' OR ') }}",
        "options": {
          "sort": "hot",
          "limit": 20
        },
        "resource": "post",
        "operation": "search"
      },
      "credentials": {
        "redditOAuth2Api": {
          "id": "YOUR_REDDIT_CREDENTIAL_ID",
          "resolve": false
        }
      },
      "typeVersion": 1,
      "id": "Search-Reddit-Posts-4"
    },
    {
      "name": "Prepare Reddit for Merge",
      "type": "n8n-nodes-base.function",
      "notes": {
        "text": "### 3.4. Prepare Reddit for Merge\n\nThis `Function` node standardizes the Reddit posts' data, ensuring consistent `source`, `title`, `text`, `link`, and `publishedAt` fields for merging.\n\n**No configuration needed.**",
        "position": "right"
      },
      "position": [
        960,
        300
      ],
      "parameters": {
        "options": {},
        "function": "const preparedItems = [];\n\nfor (const item of items) {\n  preparedItems.push({\n    json: {\n      source: 'Reddit Post',\n      title: item.json.title || 'N/A',\n      text: item.json.selftext || item.json.body || 'N/A',\n      link: item.json.url || '#',\n      publishedAt: new Date(item.json.created_utc * 1000).toISOString() // Convert Unix timestamp to ISO string\n    }\n  });\n}\n\nreturn preparedItems;"
      },
      "typeVersion": 1,
      "id": "Prepare-Reddit-for-Merge-5"
    },
    {
      "name": "Search YouTube Videos",
      "type": "n8n-nodes-base.youTube",
      "notes": {
        "text": "### 3.5. Search YouTube Videos\n\nThis `YouTube` node searches for videos mentioning your `searchKeywords`.\n\n**Setup:**\n1.  **Google Credential:** Click 'Credentials' and select 'New Credential'. Choose 'Google OAuth2 API'. You'll need to enable the YouTube Data API v3 in your Google Cloud Project and configure OAuth2 credentials (Client ID/Secret) for desktop app/other. This can be complex; search n8n docs for 'Google OAuth2' for detailed steps.\n2.  **Search:** Pre-configured to use your `searchKeywords`.\n3.  **Limit & Order:** Adjust the number of results and order as needed.\n\n**Note:** YouTube API has quotas, so excessive searches might lead to temporary limits.",
        "position": "right"
      },
      "position": [
        720,
        500
      ],
      "parameters": {
        "search": "={{ $node[\"Set Company Details\"].json.searchKeywords.join(' ') }}",
        "options": {
          "limit": 10,
          "order": "relevance"
        },
        "resource": "video",
        "operation": "list"
      },
      "credentials": {
        "googleApi": {
          "id": "YOUR_GOOGLE_API_CREDENTIAL_ID",
          "resolve": false
        }
      },
      "typeVersion": 1,
      "id": "Search-YouTube-Videos-6"
    },
    {
      "name": "Prepare YouTube for Merge",
      "type": "n8n-nodes-base.function",
      "notes": {
        "text": "### 3.6. Prepare YouTube for Merge\n\nThis `Function` node standardizes YouTube video data into the common format (`source`, `title`, `text`, `link`, `publishedAt`) for merging.\n\n**No configuration needed.**",
        "position": "right"
      },
      "position": [
        960,
        500
      ],
      "parameters": {
        "options": {},
        "function": "const preparedItems = [];\n\nfor (const item of items) {\n  preparedItems.push({\n    json: {\n      source: 'YouTube Video',\n      title: item.json.snippet.title || 'N/A',\n      text: item.json.snippet.description || 'N/A',\n      link: `https://www.youtube.com/watch?v=${item.json.id.videoId}` || '#',\n      publishedAt: item.json.snippet.publishedAt || new Date().toISOString()\n    }\n  });\n}\n\nreturn preparedItems;"
      },
      "typeVersion": 1,
      "id": "Prepare-YouTube-for-Merge-7"
    },
    {
      "name": "Alle Erwähnungen zusammenführen",
      "type": "n8n-nodes-base.itemLists",
      "notes": {
        "text": "### 4. Merge All Mentions\n\nThis `Item Lists` node combines all prepared data from Google News, Reddit, and YouTube into a single, unified list. This is the stream of all potential mentions.\n\n**No configuration needed.**",
        "position": "right"
      },
      "position": [
        1200,
        300
      ],
      "parameters": {
        "mode": "merge",
        "options": {}
      },
      "typeVersion": 1,
      "id": "Alle-Erw-hnungen-zusammenf-hren-8"
    },
    {
      "name": "SQLite: Tabelle sicherstellen",
      "type": "n8n-nodes-base.sqlite",
      "notes": {
        "text": "### 5.1. SQLite: Ensure Table Exists\n\nThis `SQLite` node ensures a local database table named `processed_mentions` exists. This table will store a hash of every mention seen, preventing duplicate notifications over time.\n\n**Setup:**\n* **Database:** `company_monitor` (this creates a file `company_monitor.db` in your n8n data directory).\n* **Query:** The `CREATE TABLE IF NOT EXISTS` query is pre-filled.\n\n**No further action needed**; this node runs automatically.",
        "position": "right"
      },
      "position": [
        1440,
        220
      ],
      "parameters": {
        "query": "CREATE TABLE IF NOT EXISTS processed_mentions (link_hash TEXT PRIMARY KEY, source TEXT, title TEXT, link TEXT, processed_date TEXT)",
        "database": "company_monitor"
      },
      "typeVersion": 1,
      "id": "SQLite-Tabelle-sicherstellen-9"
    },
    {
      "name": "Neue Erwähnungen filtern (Deduplizierung)",
      "type": "n8n-nodes-base.function",
      "notes": {
        "text": "### 5.2. Filter New Mentions (Deduplication)\n\nThis `Function` node generates a unique hash for each mention (based on its link/title) and then checks the `processed_mentions` database to see if it's already been seen. **Only truly new mentions are passed on.**\n\n**Setup:**\n* It uses `crypto` (built-in) for hashing.\n* It executes the 'SQLite: Check If Processed' node for each item.\n\n**Important:** If you change the `SQLite: Check If Processed` node's name, update it here. Also, ensure 'Run Once Per Item' is OFF for this Function node.",
        "position": "right"
      },
      "position": [
        1440,
        380
      ],
      "parameters": {
        "options": {
          "runOncePerItem": false
        },
        "function": "const crypto = require('crypto');\n\nconst newItems = [];\n\nfor (const item of items) {\n  const link = item.json.link || item.json.title; // Use link or title as primary unique identifier\n  const linkHash = crypto.createHash('md5').update(link).digest('hex');\n\n  // Add the hash to the item for later use\n  item.json.linkHash = linkHash;\n\n  // Check if this item (based on linkHash) is already in the database\n  const result = await n8n.getNodeParameter('SQLite: Check If Processed', 'query', { json: item.json }, { execute: true });\n\n  if (result.length === 0) {\n    // Item not found in DB, it's new\n    newItems.push(item);\n  }\n}\n\nreturn newItems;"
      },
      "typeVersion": 1,
      "id": "Neue-Erw-hnungen-filtern-Deduplizierung--10"
    },
    {
      "name": "SQLite: Verarbeitung prüfen",
      "type": "n8n-nodes-base.sqlite",
      "notes": {
        "text": "### (Helper) SQLite: Check If Processed\n\nThis helper `SQLite` node is called by the 'Filter New Mentions' Function node to check if a specific `link_hash` already exists in the database.\n\n**No direct configuration needed**; it's managed by the Function node.",
        "position": "right"
      },
      "position": [
        1700,
        380
      ],
      "parameters": {
        "query": "SELECT link_hash FROM processed_mentions WHERE link_hash = '{{ $json.linkHash }}'",
        "database": "company_monitor"
      },
      "typeVersion": 1,
      "id": "SQLite-Verarbeitung-pr-fen-11"
    },
    {
      "name": "KI: Sentiment analysieren & zusammenfassen",
      "type": "n8n-nodes-base.openAi",
      "notes": {
        "text": "### 6. AI: Analyze Sentiment & Summarize\n\nThis `OpenAI` node processes each *new* mention to determine its sentiment and generate a brief summary.\n\n**Setup:**\n1.  **OpenAI Credential:** Select your OpenAI API Key credential.\n2.  **Model:** `gpt-3.5-turbo` is selected. For more robust analysis, consider `gpt-4o`.\n3.  **Prompt:** The system prompt guides the AI to output sentiment and summary in a structured JSON format, making it easy to parse in the next step.\n\n**Output:** Each item will have a `choices` field containing the AI's JSON output.",
        "position": "right"
      },
      "position": [
        1920,
        300
      ],
      "parameters": {
        "model": "gpt-3.5-turbo",
        "options": {},
        "messages": [
          {
            "role": "system",
            "content": "You are a sentiment analyzer and summarizer for company mentions. For the following text related to '{{ $node[\"Set Company Details\"].json.companyName }}', determine its overall sentiment (Positive, Negative, or Neutral) and provide a concise 2-sentence summary. Output in JSON format:\n\n{\n  \"sentiment\": \"[Positive/Negative/Neutral]\",\n  \"summary\": \"[Concise summary]\"\n}"
          },
          {
            "role": "user",
            "content": "Source: {{ $json.source }}\\nTitle: {{ $json.title }}\\nText: {{ $json.text }}"
          }
        ]
      },
      "credentials": {
        "openAiApi": {
          "id": "YOUR_OPENAI_CREDENTIAL_ID",
          "resolve": false
        }
      },
      "typeVersion": 1,
      "id": "KI-Sentiment-analysieren-zusammenfassen-12"
    },
    {
      "name": "KI-Ergebnisse verarbeiten & kategorisieren",
      "type": "n8n-nodes-base.function",
      "notes": {
        "text": "### 7. Process AI Results & Categorize\n\nThis `Function` node parses the AI's JSON output and categorizes all *new* mentions by their sentiment (Positive, Negative, Neutral). It also handles potential AI parsing errors.\n\n**Output:** A single item containing categorized lists of new mentions (e.g., `positive: [...]`, `negative: [...]`).\n\n**No configuration needed.**",
        "position": "right"
      },
      "position": [
        2160,
        300
      ],
      "parameters": {
        "options": {},
        "function": "const results = {\n  positive: [],\n  neutral: [],\n  negative: [],\n  noAnalysis: []\n};\n\nfor (const item of items) {\n  try {\n    const aiOutput = JSON.parse(item.json.choices[0].message.content);\n    const sentiment = aiOutput.sentiment ? aiOutput.sentiment.toLowerCase() : 'unknown';\n    const summary = aiOutput.summary || 'No summary provided.';\n\n    const data = {\n      source: item.json.source,\n      title: item.json.title,\n      link: item.json.link,\n      summary: summary,\n      publishedAt: item.json.publishedAt,\n      linkHash: item.json.linkHash // Pass the hash for database insertion\n    };\n\n    if (sentiment === 'positive') {\n      results.positive.push(data);\n    } else if (sentiment === 'negative') {\n      results.negative.push(data);\n    } else if (sentiment === 'neutral') {\n      results.neutral.push(data);\n    } else {\n      results.noAnalysis.push(data);\n    }\n  } catch (e) {\n    results.noAnalysis.push({ ...item.json, summary: `AI analysis failed: ${e.message}` });\n  }\n}\n\nreturn [{ json: results }];"
      },
      "typeVersion": 1,
      "id": "KI-Ergebnisse-verarbeiten-kategorisieren-13"
    },
    {
      "name": "SQLite: Verarbeitete Erwähnungen aufzeichnen",
      "type": "n8n-nodes-base.sqlite",
      "notes": {
        "text": "### 8. SQLite: Record Processed Mentions\n\nThis `SQLite` node records all the *newly processed* mentions into your `processed_mentions` database. This is crucial for the deduplication step in future runs.\n\n**Setup:**\n* **Database:** `company_monitor`\n* **Query:** The `INSERT INTO` query is pre-filled, storing the unique hash and other details of the processed mention.\n\n**No further action needed**; it automatically stores the data.",
        "position": "right"
      },
      "position": [
        2400,
        220
      ],
      "parameters": {
        "query": "INSERT INTO processed_mentions (link_hash, source, title, link, processed_date) VALUES ('{{ $json.linkHash }}', '{{ $json.source }}', '{{ $json.title }}', '{{ $json.link }}', '{{ new Date().toISOString() }}')",
        "database": "company_monitor"
      },
      "typeVersion": 1,
      "id": "SQLite-Verarbeitete-Erw-hnungen-aufzeichnen-14"
    },
    {
      "name": "Berichts-E-Mail formatieren",
      "type": "n8n-nodes-base.function",
      "notes": {
        "text": "### 9. Format Report Email\n\nThis `Function` node constructs the final email body. It groups mentions by sentiment and provides summaries and links for each. It also handles the case where no *new* mentions were found.\n\n**Customization:**\n* You can adjust the headings, introduction, or the level of detail for each mention.\n* The email body uses Markdown for formatting (bolding, lists).\n\n**No configuration needed.**",
        "position": "right"
      },
      "position": [
        2400,
        380
      ],
      "parameters": {
        "options": {},
        "function": "const data = items[0].json;\nconst companyName = $node[\"Set Company Details\"].json.companyName;\n\nlet emailBody = `Good morning! Here's your daily online presence report for **${companyName}**.\\n\\n`;\n\n// Helper to format a list of mentions\nconst formatMentions = (mentions, heading) => {\n  if (mentions.length === 0) return '';\n\n  let formatted = `### ${heading} (${mentions.length} mentions)\\n`;\n  mentions.forEach(m => {\n    formatted += `* **[${m.source}]** ${m.title}\\n` +\n                 `    Summary: ${m.summary}\\n` +\n                 `    Link: ${m.link}\\n` +\n                 `    (Published: ${new Date(m.publishedAt).toLocaleDateString()})\\n\\n`;\n  });\n  return formatted;\n};\n\nemailBody += formatMentions(data.positive, 'Positive Mentions');\nemailBody += formatMentions(data.neutral, 'Neutral Mentions');\nemailBody += formatMentions(data.negative, 'Negative Mentions');\nemailBody += formatMentions(data.noAnalysis, 'Mentions with No Analysis / Error');\n\n\n// Check if any new mentions were processed\nif (data.positive.length + data.neutral.length + data.negative.length + data.noAnalysis.length === 0) {\n  emailBody = `Good morning! No *new* online mentions found for **${companyName}** today.`;\n}\n\nreturn [{ json: { emailSubject: `📊 Daily Online Presence Report for ${companyName}`, emailBody: emailBody } }];"
      },
      "typeVersion": 1,
      "id": "Berichts-E-Mail-formatieren-15"
    },
    {
      "name": "Berichts-E-Mail senden",
      "type": "n8n-nodes-base.gmail",
      "notes": {
        "text": "### 10. Send Report Email\n\nThis `Gmail` node sends the compiled daily online presence report to your mailbox.\n\n**Setup:**\n1.  **Gmail Credential:** Select your Gmail API credential.\n2.  **From Email:** Enter your Gmail address (must match the authenticated account).\n3.  **To Email:** **IMPORTANT: Change `YOUR_RECIPIENT_EMAIL@example.com` to your actual email address!**\n4.  **Subject & Text:** These fields pull the formatted subject and body from the previous 'Format Report Email' node.\n\n**Test this node by running the workflow** to ensure you receive the email.",
        "position": "right"
      },
      "position": [
        2640,
        300
      ],
      "parameters": {
        "text": "={{ $json.emailBody }}",
        "options": {},
        "subject": "={{ $json.emailSubject }}",
        "toEmail": "YOUR_RECIPIENT_EMAIL@example.com",
        "fromEmail": "YOUR_GMAIL_EMAIL@gmail.com"
      },
      "credentials": {
        "gmailApi": {
          "id": "YOUR_GMAIL_CREDENTIAL_ID",
          "resolve": false
        }
      },
      "typeVersion": 2,
      "id": "Berichts-E-Mail-senden-16"
    }
  ],
  "pinData": {},
  "version": 1,
  "connections": {
    "Alle-Erw-hnungen-zusammenf-hren-8": {
      "main": [
        [
          {
            "node": "Neue-Erw-hnungen-filtern-Deduplizierung--10",
            "type": "main"
          }
        ]
      ]
    },
    "Berichts-E-Mail-formatieren-15": {
      "main": [
        [
          {
            "node": "Berichts-E-Mail-senden-16",
            "type": "main"
          }
        ]
      ]
    },
    "Search-Reddit-Posts-4": {
      "main": [
        [
          {
            "node": "Prepare-Reddit-for-Merge-5",
            "type": "main"
          }
        ]
      ]
    },
    "Unternehmensdetails-festlegen-1": {
      "main": [
        [
          {
            "node": "Fetch-Google-News-RSS-2",
            "type": "main"
          },
          {
            "node": "Search-Reddit-Posts-4",
            "type": "main"
          },
          {
            "node": "Search-YouTube-Videos-6",
            "type": "main"
          }
        ]
      ]
    },
    "Fetch-Google-News-RSS-2": {
      "main": [
        [
          {
            "node": "News-f-r-Merge-vorbereiten-3",
            "type": "main"
          }
        ]
      ]
    },
    "Search-YouTube-Videos-6": {
      "main": [
        [
          {
            "node": "Prepare-YouTube-for-Merge-7",
            "type": "main"
          }
        ]
      ]
    },
    "News-f-r-Merge-vorbereiten-3": {
      "main": [
        [
          {
            "node": "Alle-Erw-hnungen-zusammenf-hren-8",
            "type": "main"
          }
        ]
      ]
    },
    "Prepare-Reddit-for-Merge-5": {
      "main": [
        [
          {
            "node": "Alle-Erw-hnungen-zusammenf-hren-8",
            "type": "main"
          }
        ]
      ]
    },
    "Prepare-YouTube-for-Merge-7": {
      "main": [
        [
          {
            "node": "Alle-Erw-hnungen-zusammenf-hren-8",
            "type": "main"
          }
        ]
      ]
    },
    "SQLite-Tabelle-sicherstellen-9": {
      "main": [
        [
          {
            "node": "Neue-Erw-hnungen-filtern-Deduplizierung--10",
            "type": "main"
          }
        ]
      ]
    },
    "T-glicher-Morgen-Trigger-9-Uhr--0": {
      "main": [
        [
          {
            "node": "Unternehmensdetails-festlegen-1",
            "type": "main"
          }
        ]
      ]
    },
    "KI-Ergebnisse-verarbeiten-kategorisieren-13": {
      "main": [
        [
          {
            "node": "SQLite-Verarbeitete-Erw-hnungen-aufzeichnen-14",
            "type": "main"
          },
          {
            "node": "Berichts-E-Mail-formatieren-15",
            "type": "main"
          }
        ]
      ]
    },
    "KI-Sentiment-analysieren-zusammenfassen-12": {
      "main": [
        [
          {
            "node": "KI-Ergebnisse-verarbeiten-kategorisieren-13",
            "type": "main"
          }
        ]
      ]
    },
    "SQLite-Verarbeitete-Erw-hnungen-aufzeichnen-14": {
      "main": [
        []
      ]
    },
    "Neue-Erw-hnungen-filtern-Deduplizierung--10": {
      "main": [
        [
          {
            "node": "KI-Sentiment-analysieren-zusammenfassen-12",
            "type": "main"
          }
        ]
      ],
      "output": [
        {
          "type": "item",
          "toIndex": 0,
          "fromIndex": 0,
          "destination": [
            {
              "node": "SQLite: Verarbeitung prüfen",
              "input": "input1"
            }
          ]
        }
      ]
    }
  }
}
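
Deduplication note: the "Filter New Mentions (Deduplication)" node keys every mention on an MD5 hash of its link (falling back to the title) and skips anything already recorded in the processed_mentions table. If you want to test or inspect that logic outside n8n, the following is a minimal sketch; it assumes the better-sqlite3 npm package and that company_monitor.db sits in the working directory (n8n creates it in the instance's data directory), so adjust the path to your setup.

// Sketch: reproduce the link-hash deduplication outside n8n.
// Assumes: npm install better-sqlite3, and company_monitor.db in the
// working directory (adjust to where your n8n instance stores it).
const crypto = require('crypto');
const Database = require('better-sqlite3');

const db = new Database('company_monitor.db');

// Same schema as the "SQLite: Ensure Table Exists" node
db.prepare(
  'CREATE TABLE IF NOT EXISTS processed_mentions ' +
  '(link_hash TEXT PRIMARY KEY, source TEXT, title TEXT, link TEXT, processed_date TEXT)'
).run();

function isNewMention(mention) {
  // Hash the link (or the title as a fallback), as the
  // "Filter New Mentions (Deduplication)" Function node does.
  const key = mention.link || mention.title;
  const linkHash = crypto.createHash('md5').update(key).digest('hex');
  const row = db
    .prepare('SELECT link_hash FROM processed_mentions WHERE link_hash = ?')
    .get(linkHash);
  return { isNew: row === undefined, linkHash };
}

// Example with a made-up mention
const { isNew, linkHash } = isNewMention({
  source: 'News Article',
  title: 'Example headline',
  link: 'https://example.com/article',
});
console.log(isNew ? 'new mention ' + linkHash : 'already processed');
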
Frequently Asked Questions

How do I use this workflow?

Copy the JSON code above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
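
Alternatively, you can create the workflow programmatically instead of pasting it into the editor. The sketch below is not part of the template; it assumes your n8n instance has the public REST API enabled, that you created an API key under Settings > API, and that you saved the JSON above as company-monitor.json (Node.js 18+ for the global fetch):

// Sketch: import the workflow via n8n's public REST API.
const fs = require('fs');

const N8N_URL = 'http://localhost:5678';      // adjust to your instance
const N8N_API_KEY = process.env.N8N_API_KEY;  // API key created in the n8n UI

const workflow = JSON.parse(fs.readFileSync('company-monitor.json', 'utf8'));

async function importWorkflow() {
  const res = await fetch(N8N_URL + '/api/v1/workflows', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-N8N-API-KEY': N8N_API_KEY,
    },
    // The endpoint expects at least name, nodes, connections and settings.
    body: JSON.stringify({
      name: 'Company Online Image Monitoring',
      nodes: workflow.nodes,
      connections: workflow.connections,
      settings: {},
    }),
  });
  if (!res.ok) throw new Error('Import failed: ' + res.status + ' ' + (await res.text()));
  const created = await res.json();
  console.log('Created workflow', created.id);
}

importWorkflow().catch(console.error);
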

Which scenarios is this workflow suited for?

Expert level - market research and AI summarization.

Does it cost anything?

The workflow itself is completely free. Note, however, that third-party services used in the workflow (such as the OpenAI API) may incur charges.

Related Workflows

Daily Summary of Positive News with OpenAI and Gmail
If, Cron, Gmail | 9 nodes | Piotr Sobolewski | Personal Productivity

AI-Driven Product Research and Price Comparison with Google Search and OpenAI
Set, Gmail, OpenAI | 8 nodes | Piotr Sobolewski | Market Research

Automated Daily Email Summaries with Gmail and GPT Summarization, sent every afternoon
Cron, Gmail, OpenAI | 6 nodes | Piotr Sobolewski | Personal Productivity

Automated Tracking of Game Deals Across Multiple Platforms with Deku Deals and Gmail Reminders
If, Cron, Gmail | 11 nodes | Piotr Sobolewski | Personal Productivity

Automated Web Crawler: Job/Product Monitoring with Telegram Notifications
If, Cron, Function | 6 nodes | Piotr Sobolewski | Market Research

Generate Podcast Transcript Summaries and Keywords with OpenAI and Gmail
Set, Gmail, OpenAI | 6 nodes | Piotr Sobolewski | Content Creation
Workflow Information
Difficulty level: Expert
Number of nodes: 17
Categories: 2
Node types: 10
Difficulty description

For advanced users; complex workflows with 16+ nodes

Author

Piotr Sobolewski

@piotrsobolewski

AI PhD with 7 years of experience as a game dev CEO, currently teaching, helping others, and building something new.

External links
View on n8n.io
