Reddit News Automation Pro - Multi-Platform Digest
This is a Market Research and AI Summarization automation workflow containing 25 nodes. It mainly uses nodes such as If, Set, Code, Slack, and Discord, and generates AI-curated Reddit news digests for Telegram, Discord, and Slack with Gemini. It requires:
- Slack bot token or webhook URL
- Discord bot token or webhook
- Telegram bot token
- Authentication credentials may be required for the target API
- Google Gemini API key
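Note that no Reddit credential appears in this list: the workflow reads Reddit's public JSON API, which only expects a descriptive User-Agent header. As a quick sanity check outside n8n, here is a minimal sketch of the same request the workflow's HTTP Request node makes (assuming Node.js 18+ for the built-in fetch; the subreddit and filter values are illustrative):

```javascript
// Mirrors the workflow's 🌐 Fetch Reddit Posts node:
// GET https://www.reddit.com/r/{subreddit}/top.json?t={time_filter}&limit={posts_per_subreddit}
const subreddit = 'MachineLearning'; // any public subreddit
const timeFilter = 'day';            // hour | day | week | month | year | all
const limit = 25;

(async () => {
  const res = await fetch(
    `https://www.reddit.com/r/${subreddit}/top.json?t=${timeFilter}&limit=${limit}`,
    { headers: { 'User-Agent': 'n8n-reddit-automation/1.0' } } // same UA the workflow sends
  );
  const listing = await res.json();

  // Each entry wraps the post under .data — these are the fields the parse node reads
  for (const child of listing.data.children) {
    const post = child.data;
    console.log(`${post.ups} upvotes — ${post.title}`);
  }
})();
```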
Nodes used (25)
{
"id": "MZnTCmdajWSdVOOj",
"meta": {
"instanceId": "e46a00fdf674c6a717d1781ab876c72daac39cdddf1349ff176b416ee5840ee5",
"templateCredsSetupCompleted": true
},
"name": "Reddit News Automation Pro - Multi-Platform Digest",
"tags": [],
"nodes": [
{
"id": "80444483-f96c-4e9b-a190-89220d18c280",
"name": "📖 Documentación Principal (EMPEZAR AQUÍ)",
"type": "n8n-nodes-base.stickyNote",
"position": [
-352,
-496
],
"parameters": {
"color": 4,
"width": 530,
"height": 1442,
"content": "# 🔥 Reddit News Automation Pro - Multi-Platform Digest\n\n## 📌 What This Workflow Does\nAutomatically fetches, curates, and distributes high-quality Reddit news digests from any subreddits you choose. Uses AI to filter out spam, rank by relevance, and format beautiful digests for Telegram, Discord, Slack, or Email.\n\n## 🎯 Who's It For\n- **Content creators** tracking industry trends and viral topics\n- **Marketers** monitoring brand mentions and competitor activity\n- **Researchers** following specific topics and communities\n- **Community managers** staying informed about relevant discussions\n- **Product teams** gathering user feedback and feature requests\n- **Anyone** who wants curated Reddit updates without the noise\n\n## ⚡ Key Features\n- **Multi-subreddit support** - Monitor unlimited subreddits simultaneously\n- **Smart AI curation** - Filters spam, duplicates, and low-quality content\n- **Multi-platform delivery** - Send to Telegram, Discord, Slack, or Email\n- **Flexible scheduling** - Run hourly, daily, weekly, or on-demand\n- **Customizable formatting** - Adapt output to your brand and needs\n- **Deduplication** - Never see the same post twice across digests\n- **Keyword filtering** - Focus on what matters, exclude what doesn't\n- **Upvote threshold** - Only quality content makes it through\n\n## 🛠️ Quick Setup (5 minutes)\n\n### Step 1: Configure Your Settings\n1. Click on the **⚙️ Configuration** node\n2. Edit the `subreddits` field (comma-separated list)\n3. Set `posts_per_subreddit` (default: 25)\n4. Adjust `total_posts_in_digest` (default: 10)\n5. Customize `digest_title` and `focus_keywords`\n6. Set `min_upvotes` threshold (default: 10)\n\n### Step 2: Choose Your Trigger\n- **📅 Schedule Trigger** - For automatic daily/hourly digests\n - Edit the cron expression (default: 9 AM daily)\n - Common patterns:\n - `0 9 * * *` = Daily at 9 AM\n - `0 */6 * * *` = Every 6 hours\n - `0 9 * * 1` = Every Monday at 9 AM\n\n### Step 3: Configure AI Provider\n1. Click on **🤖 AI Content Curator** node\n2. Select your AI model (Gemini, OpenAI, Claude)\n3. Add your API credentials in the credentials section\n4. Test the connection\n\n### Step 4: Set Up Output Channels\nEnable ONE or MORE of these options:\n- **📱 Telegram** - Click node, add bot token, set chat ID\n- **💬 Discord** - Click node, add webhook URL\n- **💼 Slack** - Click node, select channel from dropdown\n\n### Step 5: Test & Deploy\n1. Click **Execute Workflow** button\n2. Check your output channels for the digest\n3. Adjust settings if needed\n4. 
Click **Active** toggle to enable automation\n\n## 📋 Requirements\n- n8n version 1.0 or higher (self-hosted or cloud)\n- AI provider account (one of):\n - Google Gemini API (recommended - has free tier)\n - OpenAI API (GPT-3.5/4)\n - Anthropic Claude API\n- At least one delivery platform configured:\n - Telegram bot token + chat ID\n - Discord webhook URL\n - Slack OAuth token + channel access\n\n## 🎨 Customization Options\n\n### Change Subreddits\nEdit the `subreddits` field in **⚙️ Configuration**:\n```\nAI_Agents,MachineLearning,programming,startups,technology\n```\n\n### Adjust Time Filters\nChange `time_filter` in **⚙️ Configuration**:\n- `hour` - Last hour's top posts\n- `day` - Today's top posts (default)\n- `week` - This week's top posts\n- `month` - This month's top posts\n- `year` - This year's top posts\n- `all` - All-time top posts\n\n### Modify AI Prompt\nClick **🤖 AI Content Curator** to adjust:\n- Focus areas and keywords\n- Output format and structure\n- Ranking criteria\n- Filtering rules\n\n### Customize Output Format\nEdit **📝 Format for Multiple Platforms**:\n- Change markdown formatting\n- Adjust emoji usage\n- Modify header/footer\n- Add custom branding\n\n## 🔧 Advanced Tips\n\n### Filter by Flair\nAdd this to **🔍 Parse & Clean Posts** code:\n```javascript\nif (post.link_flair_text && !post.link_flair_text.includes('Discussion')) continue;\n```\n\n### Add Email Notifications\n1. Add a \"Send Email\" node after **✅ Check AI Output**\n2. Configure SMTP credentials\n3. Use `{{ $json.formatted_output }}` as email body\n\n### Store to Database\n1. Add a database node (Postgres/MySQL/MongoDB)\n2. Connect after **📊 Aggregate & Deduplicate**\n3. Store posts with timestamps for historical analysis\n\n### Create RSS Feed\n1. Add HTTP Request node with POST method\n2. Send to your RSS generator service\n3. 
Format data in RSS XML structure\n\n## 🔗 Helpful Resources\n- [Reddit API Documentation](https://www.reddit.com/dev/api/)\n- [n8n Documentation](https://docs.n8n.io/)\n- [Telegram Bot Setup Guide](https://core.telegram.org/bots)\n- [Discord Webhooks Guide](https://support.discord.com/hc/en-us/articles/228383668)\n- [Cron Expression Generator](https://crontab.guru/)\n\n## 🐛 Troubleshooting\n\n**No posts in digest?**\n- Check if subreddits are spelled correctly\n- Lower the `min_upvotes` threshold\n- Increase `posts_per_subreddit`\n- Change `time_filter` to 'week' or 'month'\n\n**AI generation fails?**\n- Verify API credentials are correct\n- Check API usage limits/quota\n- Try a different AI model\n- Review error output from AI node\n\n**Delivery not working?**\n- Verify credentials are properly configured\n- Check if bot/webhook has proper permissions\n- Test with a simple message first\n- Review execution logs for errors\n\n## 💡 Pro Tips\n- Use Schedule trigger for consistent daily digests\n- Set `posts_per_subreddit` higher than `total_posts_in_digest` for better AI curation\n- Focus keywords improve relevance dramatically\n- Start with 3-5 subreddits, expand gradually\n- Monitor AI costs if using paid models\n- Create multiple workflows for different topic areas\n\n## 📊 Example Use Cases\n\n**Tech News Digest**\n- Subreddits: `programming,webdev,technology,startups`\n- Time filter: `day`\n- Focus: `tutorial,release,tool,framework`\n\n**AI Research Updates**\n- Subreddits: `MachineLearning,ArtificialIntelligence,LocalLLaMA`\n- Time filter: `day`\n- Focus: `paper,research,model,breakthrough`\n\n**Gaming News**\n- Subreddits: `gaming,pcgaming,PS5,NintendoSwitch`\n- Time filter: `day`\n- Focus: `release,update,review,announcement`\n\n**Marketing Insights**\n- Subreddits: `marketing,socialmedia,SEO,PPC`\n- Time filter: `week`\n- Focus: `strategy,case study,tool,growth`\n\n---\n\n**Need Help?** Check n8n community forum or documentation for detailed support."
},
"typeVersion": 1
},
{
"id": "a26acf04-a3b6-4ce8-b1f9-79bab59f7998",
"name": "Nota Adhesiva - Sección de Disparador",
"type": "n8n-nodes-base.stickyNote",
"position": [
256,
-496
],
"parameters": {
"color": 5,
"width": 316,
"height": 554,
"content": "## 🎬 STEP 1: TRIGGER\n\n**Choose ONE trigger method:**\n\n### 📅 Option A: Schedule Trigger (Recommended)\n- Runs automatically at set times\n- Perfect for daily/weekly digests\n- Edit cron expression to customize:\n - `0 9 * * *` = Daily at 9 AM\n - `0 */6 * * *` = Every 6 hours\n - `0 9 * * 1` = Weekly on Monday\n\n### ⚡ Option B: Manual Trigger\n- Click \"Test Workflow\" to run on-demand\n- Great for testing and one-off digests\n\n**Current Setting:** Daily at 9 AM UTC\n\n💡 **Tip:** Use https://crontab.guru to create custom schedules"
},
"typeVersion": 1
},
{
"id": "0f1a3665-4e84-45bc-a354-377bfdcedb67",
"name": "Nota Adhesiva - Configuración",
"type": "n8n-nodes-base.stickyNote",
"position": [
576,
-576
],
"parameters": {
"color": 5,
"width": 327,
"height": 617,
"content": "## ⚙️ STEP 2: CONFIGURATION\n\n**Edit these settings to customize your digest:**\n\n### Required Settings:\n- **subreddits** - Comma-separated list\n - Example: `AI_Agents,Python,webdev`\n- **posts_per_subreddit** - How many to fetch (25 recommended)\n- **total_posts_in_digest** - Final count after AI curation (10 recommended)\n\n### Filtering Options:\n- **time_filter** - `hour`, `day`, `week`, `month`, `year`, `all`\n- **min_upvotes** - Quality threshold (10+ recommended)\n- **focus_keywords** - What to prioritize\n- **exclude_keywords** - What to filter out\n\n### Presentation:\n- **digest_title** - Name of your digest\n\n💡 **Tip:** Fetch MORE posts than you need (25) so AI has good options to curate from"
},
"typeVersion": 1
},
{
"id": "840d9ff1-ebe6-4583-863f-caa949e6fcfe",
"name": "Nota Adhesiva - Procesamiento",
"type": "n8n-nodes-base.stickyNote",
"position": [
896,
-448
],
"parameters": {
"color": 5,
"width": 327,
"height": 464,
"content": "## 🔄 STEP 3: DATA PROCESSING\n\n**These nodes prepare subreddit data:**\n\n### 📋 Split Subreddit List\n- Converts comma-separated string to array\n- Example: `\"AI,ML,Tech\"` → `[\"AI\", \"ML\", \"Tech\"]`\n\n### 🔄 Loop Through Subreddits\n- Creates one item per subreddit\n- Enables parallel fetching\n- Iterates using `$itemIndex`\n\n**No configuration needed** - these nodes work automatically based on your Configuration settings."
},
"typeVersion": 1
},
{
"id": "1c4e6f9a-db18-4473-8a2f-f950977f3731",
"name": "Nota Adhesiva - Obtener Datos",
"type": "n8n-nodes-base.stickyNote",
"position": [
1232,
-432
],
"parameters": {
"color": 5,
"width": 327,
"height": 441,
"content": "## 📥 STEP 4: FETCH REDDIT DATA\n\n**How it works:**\n- Uses Reddit's official JSON API (no auth needed)\n- Fetches from: `reddit.com/r/{subreddit}/top.json`\n- Parallel execution for speed\n- 30-second timeout per request\n\n**Data Retrieved:**\n- Post title, URL, and content\n- Author and subreddit\n- Upvotes and comments count\n- Timestamp and flair\n- Domain and media type\n\n**Important Notes:**\n- ✅ No Reddit API key required\n- ✅ Rate limited to ~60 requests/minute\n- ✅ Works for any public subreddit\n- ❌ Cannot access private/banned subreddits\n\n💡 **Tip:** Reddit's JSON API is free and reliable, but respect rate limits"
},
"typeVersion": 1
},
{
"id": "66dc19c8-80eb-4a37-b5dc-752c756d1637",
"name": "Nota Adhesiva - Analizar Publicaciones",
"type": "n8n-nodes-base.stickyNote",
"position": [
1568,
-496
],
"parameters": {
"color": 5,
"width": 327,
"height": 441,
"content": "## 🔍 STEP 5: PARSE & CLEAN\n\n**This code node cleans the raw Reddit data:**\n\n### Filters Applied:\n- ❌ Removes posts below `min_upvotes`\n- ❌ Skips stickied posts (rules/announcements)\n- ❌ Excludes deleted/removed content\n- ❌ Filters out invalid URLs\n\n### Data Cleaned:\n- Trims whitespace and newlines\n- Truncates long selftext to 500 chars\n- Converts timestamps to ISO format\n- Builds clean Reddit URLs\n\n### Output Structure:\n```json\n{\n \"posts\": [...],\n \"subreddit\": \"AI_Agents\",\n \"total_posts\": 18\n}\n```\n\n**Result:** Clean, structured data ready for AI curation"
},
"typeVersion": 1
},
{
"id": "522b356f-35b6-4d30-953e-51f0c47ff539",
"name": "Nota Adhesiva - Agregar",
"type": "n8n-nodes-base.stickyNote",
"position": [
1936,
-512
],
"parameters": {
"color": 5,
"width": 327,
"height": 441,
"content": "## 📊 STEP 6: AGGREGATE & DEDUPLICATE\n\n**Combines all subreddit data:**\n\n### Aggregation:\n- Merges posts from all subreddits\n- Preserves source subreddit info\n- Maintains sorting by score\n\n### Deduplication:\n- Removes duplicate post IDs\n- Handles cross-posted content\n- Keeps highest-scored version\n\n### Smart Filtering:\n- Applies keyword matching\n - ✅ Includes posts with `focus_keywords`\n - ❌ Excludes posts with `exclude_keywords`\n- Case-insensitive matching\n- Searches in title, selftext, and flair\n\n**Output:** Single unified list of unique, relevant posts ready for AI ranking"
},
"typeVersion": 1
},
{
"id": "4ce64fee-4588-4b13-8d6a-104c18811c3f",
"name": "Nota Adhesiva - Curación por IA",
"type": "n8n-nodes-base.stickyNote",
"position": [
2256,
-640
],
"parameters": {
"color": 5,
"width": 358,
"height": 546,
"content": "## 🤖 STEP 7: AI CURATION\n\n**AI Agent ranks and formats posts:**\n\n### What the AI Does:\n1. **Ranks** posts by relevance and quality\n2. **Selects** top N posts (from config)\n3. **Formats** with consistent structure\n4. **Adds** context and insights\n5. **Filters** spam and low-quality content\n\n### AI Provider Options:\n- **Google Gemini** (recommended - free tier)\n- **OpenAI GPT-3.5/4** (paid)\n- **Claude** (paid)\n- **Ollama** (self-hosted, free)\n\n### Setup Instructions:\n1. Click on the **Google Gemini Flash 2.0** node\n2. Click **Create New Credential**\n3. Enter your API key\n4. Test the connection\n\n### Get API Keys:\n- Gemini: https://makersuite.google.com/app/apikey\n- OpenAI: https://platform.openai.com/api-keys\n- Claude: https://console.anthropic.com/\n\n💡 **Tip:** Gemini Flash 2.0 is fast, cheap, and has a generous free tier"
},
"typeVersion": 1
},
{
"id": "cee5bc74-c80e-455d-a69f-1f061fe4ad15",
"name": "Nota Adhesiva - Formatear Salida",
"type": "n8n-nodes-base.stickyNote",
"position": [
2672,
-704
],
"parameters": {
"color": 5,
"width": 327,
"height": 601,
"content": "## 📝 STEP 8: FORMAT OUTPUT\n\n**Formats AI-curated content for delivery:**\n\n### This node:\n- Extracts formatted text from AI response\n- Handles markdown formatting\n- Ensures consistent structure\n- Prepares for multi-platform delivery\n\n### Output Format:\n```\n🤖 [Digest Title]\n📅 [Date]\n\n1️⃣ [Post Title]\n r/[subreddit] • 👍 [upvotes] • 💬 [comments]\n [Summary or key points]\n 🔗 [Link]\n\n2️⃣ [Post Title]\n ...\n```\n\n**Customization:** Edit the AI prompt in the previous node to change format"
},
"typeVersion": 1
},
{
"id": "27d43d74-19ab-4e1e-9161-fb6b5efdaf8a",
"name": "Nota Adhesiva - Validación",
"type": "n8n-nodes-base.stickyNote",
"position": [
2032,
320
],
"parameters": {
"color": 5,
"width": 327,
"height": 446,
"content": "## ✅ STEP 9: VALIDATION\n\n**Checks if AI generated valid output:**\n\n### Validation Logic:\n- Checks if `formatted_output` exists\n- Verifies it's not empty\n- Prevents sending blank messages\n\n### Flow Control:\n- ✅ **True** → Sends to delivery platforms\n- ❌ **False** → Stops workflow (see error handling)\n\n**This prevents failed deliveries and wasted API calls**"
},
"typeVersion": 1
},
{
"id": "401c2f37-1c0f-462f-87e3-e57652c346c0",
"name": "Nota Adhesiva - Plataformas de Entrega",
"type": "n8n-nodes-base.stickyNote",
"position": [
2688,
-64
],
"parameters": {
"color": 5,
"width": 420,
"height": 650,
"content": "## 📤 STEP 10: DELIVERY\n\n**Choose your delivery platform(s):**\n\n### 📱 Telegram\n1. Create bot via [@BotFather](https://t.me/botfather)\n2. Get bot token\n3. Get chat ID (send message to bot, check updates)\n4. Add credentials to node\n5. **Enable node** (currently disabled)\n\n### 💬 Discord\n1. Create webhook in Discord server settings\n2. Copy webhook URL\n3. Add to node credentials\n4. **Enable node** (currently disabled)\n\n### 💼 Slack\n1. Create Slack app\n2. Add OAuth permissions\n3. Install to workspace\n4. Select channel from dropdown\n5. **Enable node** (currently disabled)\n\n### 📧 Email (Add if needed)\n1. Add \"Send Email\" node\n2. Configure SMTP settings\n3. Use same `formatted_output` data\n\n**To Enable a Platform:**\n- Click on the node\n- Click the 3-dot menu\n- Select \"Enable\"\n\n💡 **Tip:** You can enable multiple platforms to send to all at once!"
},
"typeVersion": 1
},
{
"id": "132efcb7-a0c6-4bca-8420-891971640e7f",
"name": "📅 Disparador Programado (Diario 9 AM)",
"type": "n8n-nodes-base.scheduleTrigger",
"position": [
384,
96
],
"parameters": {
"rule": {
"interval": [
{
"field": "cronExpression",
"expression": "0 9 * * *"
}
]
}
},
"typeVersion": 1.2
},
{
"id": "43f60c9c-e735-4e3c-907e-91cca5cd377f",
"name": "⚙️ Configuración",
"type": "n8n-nodes-base.set",
"position": [
592,
96
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "subreddits",
"name": "subreddits",
"type": "string",
"value": "=AI_Agents,generativeAI,ArtificialInteligence,MachineLearning,OpenAI,ChatGPT"
},
{
"id": "posts_per_subreddit",
"name": "posts_per_subreddit",
"type": "number",
"value": 25
},
{
"id": "time_filter",
"name": "time_filter",
"type": "string",
"value": "day"
},
{
"id": "total_posts_in_digest",
"name": "total_posts_in_digest",
"type": "number",
"value": 10
},
{
"id": "digest_title",
"name": "digest_title",
"type": "string",
"value": "🤖 AI Daily Digest"
},
{
"id": "focus_keywords",
"name": "focus_keywords",
"type": "string",
"value": "AI agents, ChatGPT, LLM, machine learning, research, tool, breakthrough"
},
{
"id": "exclude_keywords",
"name": "exclude_keywords",
"type": "string",
"value": "crypto, NFT, political, spam"
},
{
"id": "min_upvotes",
"name": "min_upvotes",
"type": "number",
"value": 10
}
]
}
},
"typeVersion": 3.4
},
{
"id": "ced0b421-1f5b-4b47-93f8-1e69bc13f7da",
"name": "📋 Dividir Lista de Subreddits",
"type": "n8n-nodes-base.set",
"position": [
784,
96
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "subreddit_array",
"name": "subreddit_array",
"type": "array",
"value": "={{ $json.subreddits.split(',').map(s => s.trim()) }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "0efa91cf-0173-4d52-9e79-cd7fe40ae9a5",
"name": "🔄 Iterar por Subreddits",
"type": "n8n-nodes-base.set",
"position": [
992,
96
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "subreddit",
"name": "subreddit",
"type": "string",
"value": "={{ $json.subreddit_array[$itemIndex] }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "75139925-31fb-4999-8ae3-da18c84fea16",
"name": "🌐 Obtener Publicaciones Reddit (JSON API)",
"type": "n8n-nodes-base.httpRequest",
"position": [
1232,
96
],
"parameters": {
"url": "=https://www.reddit.com/r/{{ $json.subreddit }}/top.json?t={{ $('⚙️ Configuration').first().json.time_filter }}&limit={{ $('⚙️ Configuration').first().json.posts_per_subreddit }}",
"options": {
"timeout": 30000,
"response": {
"response": {
"responseFormat": "json"
}
}
},
"sendHeaders": true,
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"headerParameters": {
"parameters": [
{
"name": "User-Agent",
"value": "n8n-reddit-automation/1.0"
}
]
}
},
"credentials": {
"httpHeaderAuth": {
"id": "LDYdFaVICj6n2wNn",
"name": "openrouter nano ban"
}
},
"typeVersion": 4.2
},
{
"id": "351cd99a-2e3a-4be1-92f1-c3a782fd0a2b",
"name": "🔍 Analizar y Limpiar Publicaciones",
"type": "n8n-nodes-base.code",
"position": [
1424,
96
],
"parameters": {
"jsCode": "// Extract and clean Reddit posts from JSON response\nconst data = $input.first().json.data;\nconst configData = $('⚙️ Configuration').first().json;\nconst minUpvotes = configData.min_upvotes || 0;\nconst currentSubreddit = $json.subreddit;\n\nif (!data || !data.children) {\n return [{ json: { posts: [], error: 'No data found' } }];\n}\n\nconst posts = [];\n\nfor (const child of data.children) {\n const post = child.data;\n \n // Skip if below minimum upvotes\n if (post.ups < minUpvotes) continue;\n \n // Skip stickied posts (usually subreddit rules/announcements)\n if (post.stickied) continue;\n \n // Skip removed/deleted posts\n if (post.removed_by_category || post.selftext === '[removed]' || post.selftext === '[deleted]') continue;\n \n // Build clean post object\n const cleanPost = {\n id: post.id,\n subreddit: currentSubreddit,\n title: post.title.replace(/[\\r\\n]+/g, ' ').trim(),\n url: post.url.startsWith('http') ? post.url : `https://www.reddit.com${post.permalink}`,\n permalink: `https://www.reddit.com${post.permalink}`,\n author: post.author,\n upvotes: post.ups,\n num_comments: post.num_comments,\n created_utc: post.created_utc,\n created_date: new Date(post.created_utc * 1000).toISOString(),\n selftext: post.selftext ? post.selftext.substring(0, 500) : '',\n is_self: post.is_self,\n domain: post.domain,\n score: post.score,\n flair_text: post.link_flair_text || ''\n };\n \n posts.push(cleanPost);\n}\n\n// Sort by score (upvotes)\nposts.sort((a, b) => b.score - a.score);\n\nreturn [{ json: { posts, subreddit: currentSubreddit, total_posts: posts.length } }];\n"
},
"typeVersion": 2
},
{
"id": "c2733e06-c5f5-48e0-8135-656dd108423b",
"name": "📊 Agregar y Eliminar Duplicados",
"type": "n8n-nodes-base.code",
"position": [
1616,
96
],
"parameters": {
"jsCode": "// Aggregate posts from all subreddits and apply smart filtering\nconst allItems = $input.all();\nconst configData = $('⚙️ Configuration').first().json;\n\n// Collect all posts from all subreddits\nconst allPosts = [];\nfor (const item of allItems) {\n if (item.json.posts && Array.isArray(item.json.posts)) {\n allPosts.push(...item.json.posts);\n }\n}\n\n// Deduplicate by post ID\nconst uniquePosts = [];\nconst seenIds = new Set();\n\nfor (const post of allPosts) {\n if (!seenIds.has(post.id)) {\n seenIds.add(post.id);\n uniquePosts.push(post);\n }\n}\n\n// Apply keyword filtering if configured\nconst focusKeywords = configData.focus_keywords ? \n configData.focus_keywords.toLowerCase().split(',').map(k => k.trim()) : [];\nconst excludeKeywords = configData.exclude_keywords ? \n configData.exclude_keywords.toLowerCase().split(',').map(k => k.trim()) : [];\n\nconst filteredPosts = uniquePosts.filter(post => {\n const searchText = `${post.title} ${post.selftext} ${post.flair_text}`.toLowerCase();\n \n // Check exclude keywords first\n if (excludeKeywords.length > 0) {\n for (const keyword of excludeKeywords) {\n if (searchText.includes(keyword)) {\n return false; // Exclude this post\n }\n }\n }\n \n // If focus keywords are set, post must contain at least one\n if (focusKeywords.length > 0) {\n let hasKeyword = false;\n for (const keyword of focusKeywords) {\n if (searchText.includes(keyword)) {\n hasKeyword = true;\n break;\n }\n }\n return hasKeyword;\n }\n \n return true; // No filters or passed all filters\n});\n\n// Sort by score\nfilteredPosts.sort((a, b) => b.score - a.score);\n\nreturn [{ \n json: { \n all_posts: filteredPosts,\n total_subreddits: allItems.length,\n total_posts_fetched: allPosts.length,\n total_unique_posts: uniquePosts.length,\n total_filtered_posts: filteredPosts.length,\n config: configData\n } \n}];\n"
},
"typeVersion": 2
},
{
"id": "7d167036-16e2-48cf-ace9-7b63b40dc69d",
"name": "🤖 Curador de Contenido con IA",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
1808,
96
],
"parameters": {
"text": "=You are an expert content curator for Reddit news digests. Your job is to analyze, rank, and format Reddit posts into a high-quality digest.\n\n## Context\n- Digest Title: {{ $json.config.digest_title }}\n- Target Audience: Tech-savvy users interested in {{ $json.config.focus_keywords }}\n- Total Posts Available: {{ $json.total_filtered_posts }}\n- Posts Needed: {{ $json.config.total_posts_in_digest }}\n\n## Your Task\n1. Review all provided Reddit posts\n2. Rank them by:\n - Relevance to focus keywords: {{ $json.config.focus_keywords }}\n - Quality and credibility of content\n - Engagement (upvotes, comments)\n - Uniqueness and value\n - Recency and timeliness\n\n3. Select the TOP {{ $json.config.total_posts_in_digest }} most valuable posts\n\n4. Format them into a beautiful, scannable digest with this structure:\n\n```\n{{ $json.config.digest_title }}\n📅 {{ $now.format('MMMM DD, YYYY') }}\n━━━━━━━━━━━━━━━━━━━━\n\n[For each post, use this format:]\n\n1️⃣ **[Post Title]**\n 📍 r/[subreddit] • 👍 [upvotes] • 💬 [comments]\n \n [Write 1-2 sentences explaining why this post is valuable and what the key insight is]\n \n 🔗 [Reddit URL]\n\n2️⃣ **[Next Post Title]**\n ...\n\n━━━━━━━━━━━━━━━━━━━━\n💡 Curated by AI • {{ $json.total_filtered_posts }} posts analyzed\n```\n\n## Guidelines\n- Be concise but informative\n- Use emojis sparingly for visual hierarchy\n- Focus on WHY each post matters\n- Avoid jargon; explain technical terms\n- Keep summaries actionable\n- Number posts 1️⃣ through 🔟\n\n## Reddit Posts Data\n{{ JSON.stringify($json.all_posts, null, 2) }}\n\n## Output\nProvide ONLY the formatted digest text, ready to send. No meta-commentary.",
"options": {
"systemMessage": "You are a professional content curator. You create concise, valuable, and engaging content digests. You understand what makes content worth reading and can identify signal from noise."
},
"promptType": "define",
"hasOutputParser": true
},
"typeVersion": 1.7
},
{
"id": "5610b7d3-4489-4314-8d57-40ac7a907c85",
"name": "Google Gemini Flash 2.0",
"type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"position": [
1808,
304
],
"parameters": {
"options": {}
},
"typeVersion": 1
},
{
"id": "0e8207b0-4330-4e03-a155-fea83a88e8cd",
"name": "📝 Formatear para Múltiples Plataformas",
"type": "n8n-nodes-base.code",
"position": [
2064,
96
],
"parameters": {
"jsCode": "// Extract formatted output from AI response\nconst aiResponse = $input.first().json;\n\n// The AI agent returns output in different formats depending on the response\nlet formattedOutput = '';\n\nif (aiResponse.output) {\n formattedOutput = aiResponse.output;\n} else if (aiResponse.text) {\n formattedOutput = aiResponse.text;\n} else if (typeof aiResponse === 'string') {\n formattedOutput = aiResponse;\n} else {\n // Fallback: convert entire response to string\n formattedOutput = JSON.stringify(aiResponse, null, 2);\n}\n\n// Clean up any extra whitespace\nformattedOutput = formattedOutput.trim();\n\nreturn [{ \n json: { \n formatted_output: formattedOutput,\n timestamp: new Date().toISOString(),\n digest_title: $('⚙️ Configuration').first().json.digest_title\n } \n}];\n"
},
"typeVersion": 2
},
{
"id": "fd16865d-a1f5-4085-89c8-e1193531bfcf",
"name": "✅ Verificar Salida de la IA",
"type": "n8n-nodes-base.if",
"position": [
2208,
96
],
"parameters": {
"options": {},
"conditions": {
"options": {
"leftValue": "",
"caseSensitive": false
},
"combinator": "and",
"conditions": [
{
"id": "check-output",
"operator": {
"type": "string",
"operation": "notEmpty"
},
"leftValue": "={{ $json.formatted_output }}",
"rightValue": ""
}
]
}
},
"typeVersion": 2.1
},
{
"id": "ff981252-16c1-4307-810d-048106cb751e",
"name": "📱 Enviar a Telegram",
"type": "n8n-nodes-base.telegram",
"position": [
2464,
-96
],
"webhookId": "503f1790-45aa-4f10-9565-63e0025a4ca7",
"parameters": {
"text": "={{ $json.formatted_output }}",
"chatId": "={{ $('⚙️ Configuration').first().json.telegram_chat_id || 'YOUR_CHAT_ID' }}",
"additionalFields": {
"parse_mode": "Markdown"
}
},
"typeVersion": 1.2
},
{
"id": "5cc13e05-a01b-4bb5-b0d1-ed8e935c59f4",
"name": "💬 Enviar a Discord",
"type": "n8n-nodes-base.discord",
"position": [
2464,
96
],
"webhookId": "f2cac624-a133-4201-a7eb-39328def4185",
"parameters": {
"content": "={{ $json.formatted_output }}",
"options": {},
"authentication": "webhook"
},
"typeVersion": 2
},
{
"id": "20df81d2-629c-46ce-b367-b1a21d32fc98",
"name": "💼 Enviar a Slack",
"type": "n8n-nodes-base.slack",
"position": [
2480,
320
],
"webhookId": "088380b0-220c-4bfc-9287-b6ab72b62840",
"parameters": {
"text": "={{ $json.formatted_output }}",
"select": "channel",
"channelId": {
"__rl": true,
"mode": "list",
"value": "YOUR_CHANNEL_ID"
},
"otherOptions": {}
},
"typeVersion": 2.2
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "5e18ee82-46de-49aa-9cdf-88614eb2d4bc",
"connections": {
"fd16865d-a1f5-4085-89c8-e1193531bfcf": {
"main": [
[
{
"node": "ff981252-16c1-4307-810d-048106cb751e",
"type": "main",
"index": 0
},
{
"node": "5cc13e05-a01b-4bb5-b0d1-ed8e935c59f4",
"type": "main",
"index": 0
},
{
"node": "20df81d2-629c-46ce-b367-b1a21d32fc98",
"type": "main",
"index": 0
}
]
]
},
"43f60c9c-e735-4e3c-907e-91cca5cd377f": {
"main": [
[
{
"node": "ced0b421-1f5b-4b47-93f8-1e69bc13f7da",
"type": "main",
"index": 0
}
]
]
},
"5610b7d3-4489-4314-8d57-40ac7a907c85": {
"ai_languageModel": [
[
{
"node": "7d167036-16e2-48cf-ace9-7b63b40dc69d",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"7d167036-16e2-48cf-ace9-7b63b40dc69d": {
"main": [
[
{
"node": "0e8207b0-4330-4e03-a155-fea83a88e8cd",
"type": "main",
"index": 0
}
]
]
},
"351cd99a-2e3a-4be1-92f1-c3a782fd0a2b": {
"main": [
[
{
"node": "c2733e06-c5f5-48e0-8135-656dd108423b",
"type": "main",
"index": 0
}
]
]
},
"ced0b421-1f5b-4b47-93f8-1e69bc13f7da": {
"main": [
[
{
"node": "0efa91cf-0173-4d52-9e79-cd7fe40ae9a5",
"type": "main",
"index": 0
}
]
]
},
"c2733e06-c5f5-48e0-8135-656dd108423b": {
"main": [
[
{
"node": "7d167036-16e2-48cf-ace9-7b63b40dc69d",
"type": "main",
"index": 0
}
]
]
},
"0efa91cf-0173-4d52-9e79-cd7fe40ae9a5": {
"main": [
[
{
"node": "75139925-31fb-4999-8ae3-da18c84fea16",
"type": "main",
"index": 0
}
]
]
},
"75139925-31fb-4999-8ae3-da18c84fea16": {
"main": [
[
{
"node": "351cd99a-2e3a-4be1-92f1-c3a782fd0a2b",
"type": "main",
"index": 0
}
]
]
},
"132efcb7-a0c6-4bca-8420-891971640e7f": {
"main": [
[
{
"node": "43f60c9c-e735-4e3c-907e-91cca5cd377f",
"type": "main",
"index": 0
}
]
]
},
"0e8207b0-4330-4e03-a155-fea83a88e8cd": {
"main": [
[
{
"node": "fd16865d-a1f5-4085-89c8-e1193531bfcf",
"type": "main",
"index": 0
}
]
]
}
}
}
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and then adjust the credential settings as needed. On a self-hosted instance, you can also save the JSON to a file and import it from the command line with n8n import:workflow --input=workflow.json.
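Before toggling the workflow to Active, it is worth testing each delivery credential in isolation, as the embedded troubleshooting notes suggest ("test with a simple message first"). Here is a minimal sketch for Telegram, assuming Node.js 18+; the bot token and chat ID are placeholders for your own values:

```javascript
// Send a one-off test message through the Telegram Bot API to confirm the
// bot token and chat ID before enabling the 📱 Send to Telegram node.
const BOT_TOKEN = 'YOUR_BOT_TOKEN'; // placeholder - issued by @BotFather
const CHAT_ID = 'YOUR_CHAT_ID';     // placeholder - e.g. found via the getUpdates endpoint

(async () => {
  const res = await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      chat_id: CHAT_ID,
      text: 'n8n Reddit digest - test message',
      parse_mode: 'Markdown' // the same parse mode the workflow's Telegram node uses
    })
  });
  console.log(await res.json()); // { ok: true, ... } means the credentials work
})();
```

If the response contains ok: true, the same token and chat ID can be entered in the workflow's Telegram node.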
Which scenarios is this workflow suited for?
Advanced - Market Research, AI Summarization
Is it paid?
This workflow is completely free: you can import and use it directly. Note, however, that the third-party services it uses (such as the Google Gemini or OpenAI APIs) may charge your own account.
Hemanth Arety
@hemantharety