Generate AI Videos from Text or Images with Sora 2/Pro and GPT-5 Prompt Enhancement
Advanced
This is an automation workflow in the Content Creation and Multimodal AI domain, comprising 22 nodes. It primarily uses Form, Wait, Switch, FormTrigger, and HttpRequest nodes to generate AI videos from text or images using Sora 2/Pro, with GPT-5 prompt enhancement.
Prerequisites
- Authentication credentials for the target APIs may be required
- OpenAI API Key
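The workflow's fal.ai HTTP nodes authenticate through n8n's generic HTTP Header Auth credential. As a hedged sketch (the `Key` scheme is fal.ai's documented header format, but verify it against your own fal.ai account before relying on it), the header the credential supplies looks like:

```python
import os

def fal_headers(api_key: str) -> dict:
    """Build the Authorization header assumed by fal.ai's queue API.
    In n8n this lives in the HTTP Header Auth credential, not in the node."""
    return {
        "Authorization": f"Key {api_key}",
        "Content-Type": "application/json",
    }

# Read the key from the environment rather than hard-coding it.
headers = fal_headers(os.environ.get("FAL_KEY", "<your-fal-key>"))
```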
Export Workflow
Copy the JSON configuration below and import it into n8n to use this workflow.
{
"meta": {
"instanceId": "3d7eb9567ae690bf8c9bba1cb43396e6e40c18e15eb5889cf9673ed1713da6db",
"templateCredsSetupCompleted": true
},
"nodes": [
{
"id": "a85d1e91-bf05-4a5a-9759-15de2b1b50e8",
"name": "注意:模式路由器",
"type": "n8n-nodes-base.stickyNote",
"position": [
5280,
2640
],
"parameters": {
"color": 3,
"width": 332,
"height": 192,
"content": "## 🔀 输入模式路由器"
},
"typeVersion": 1
},
{
"id": "eda124e2-e1aa-4064-96a7-c31cbde19521",
"name": "注意:图像上传",
"type": "n8n-nodes-base.stickyNote",
"position": [
5744,
2080
],
"parameters": {
"color": 6,
"width": 332,
"height": 192,
"content": "## 🖼️ 临时图像上传"
},
"typeVersion": 1
},
{
"id": "1c76b2c1-9778-4f10-a078-6ce8eba97641",
"name": "注意:提示优化器",
"type": "n8n-nodes-base.stickyNote",
"position": [
5488,
2928
],
"parameters": {
"color": 5,
"width": 332,
"height": 192,
"content": "## 🤖 提示优化器"
},
"typeVersion": 1
},
{
"id": "707ea9aa-81eb-47d1-b000-eb3238c70b94",
"name": "注意:JSON 解析器",
"type": "n8n-nodes-base.stickyNote",
"position": [
5920,
2944
],
"parameters": {
"color": 3,
"width": 332,
"height": 192,
"content": "## 🔍 JSON 输出解析器"
},
"typeVersion": 1
},
{
"id": "c474d947-6610-4731-bc3b-af7c09f926be",
"name": "注意:文本转视频",
"type": "n8n-nodes-base.stickyNote",
"position": [
6288,
2752
],
"parameters": {
"color": 5,
"width": 332,
"height": 192,
"content": "## 🎥 文本转视频调用"
},
"typeVersion": 1
},
{
"id": "607e987b-df5f-4d7a-8d49-b79440c62962",
"name": "注意:图像转视频",
"type": "n8n-nodes-base.stickyNote",
"position": [
6240,
2144
],
"parameters": {
"color": 5,
"width": 332,
"height": 192,
"content": "## 🖼️ 图像转视频调用"
},
"typeVersion": 1
},
{
"id": "f50b63a0-dee7-4783-bcff-8b5d5b75949b",
"name": "视频输入表单",
"type": "n8n-nodes-base.formTrigger",
"position": [
5392,
2448
],
"webhookId": "45e6e98c-ff89-46c5-b143-c5d4c9dab16f",
"parameters": {
"options": {
"appendAttribution": false
},
"formTitle": "Create a Video using Sora 2",
"formFields": {
"values": [
{
"fieldLabel": "Prompt",
"requiredField": true
},
{
"fieldType": "dropdown",
"fieldLabel": "Aspect Ratio",
"fieldOptions": {
"values": [
{
"option": "9:16 (vertical)"
},
{
"option": "16:9 (Horizontal)"
}
]
},
"requiredField": true
},
{
"fieldType": "checkbox",
"fieldLabel": "Model",
"fieldOptions": {
"values": [
{
"option": "sora-2"
},
{
"option": "sora-2-pro"
}
]
},
"requiredField": true,
"limitSelection": "exact"
},
{
"fieldType": "checkbox",
"fieldLabel": "Lenght",
"fieldOptions": {
"values": [
{
"option": "4s"
},
{
"option": "8s"
},
{
"option": "12s"
}
]
},
"requiredField": true,
"limitSelection": "exact"
},
{
"fieldType": "file",
"fieldLabel": "Image",
"multipleFiles": false,
"acceptFileTypes": ".jpg,.jpeg,.png"
}
]
}
},
"typeVersion": 2.3
},
{
"id": "0c7ab4d0-5a34-41c1-9ea8-5a145056cee6",
"name": "提示优化器",
"type": "@n8n/n8n-nodes-langchain.chainLlm",
"position": [
5840,
2592
],
"parameters": {
"text": "=User Query: {{ $json.Prompt }}\n\nApect Ratio: {{ $json['Aspect Ratio'] }}\n\nLenght: {{ $json.Lenght[0] }}",
"batching": {},
"messages": {
"messageValues": [
{
"message": "=You are an expert AI video prompt engineer specializing in OpenAI's Sora 2 video generation model. Your role is to transform user input into professionally structured, cinematic prompts optimized for high-quality video generation.\n\n## Core Principles\n\n1. **Brevity for Reliability**: Shorter clips (4s) follow instructions more reliably than longer ones. Recommend 4s duration unless user specifically needs longer content.\n\n2. **Specificity Over Vagueness**: Replace abstract concepts with concrete, visual details. Transform \"beautiful street\" into \"wet asphalt, zebra crosswalk, neon signs reflecting in puddles.\"\n\n3. **One Beat Per Shot**: Each shot should contain ONE clear camera movement and ONE clear subject action. Avoid cramming multiple complex actions into a single clip.\n\n4. **Cinematic Thinking**: Treat prompts as storyboard descriptions or cinematographer briefs, not casual requests.\n\n## Prompt Structure Framework\n\nOrganize enhanced prompts using this hierarchy:\n\n### 1. Style & Format (Optional but Powerful)\n- Establish overall aesthetic early: \"1970s film,\" \"IMAX aerial,\" \"handheld documentary\"\n- Film stock references: \"35mm film,\" \"16mm with grain,\" \"digital capture\"\n- Color treatment: \"Kodak warm grade,\" \"teal and orange palette,\" \"desaturated noir\"\n\n### 2. Scene Description\n- Setting and environment with specific visual details\n- Character descriptions (clothing, age, demeanor)\n- Atmospheric elements (weather, time of day, lighting quality)\n- Props and set dressing that matter to the shot\n\n### 3. 
Cinematography\n**Camera shot**: Specify framing and angle\n- Examples: \"wide establishing shot, eye level,\" \"medium close-up, slight low angle,\" \"aerial wide shot, downward tilt\"\n\n**Lens/DOF**: When detail matters\n- Examples: \"35mm lens, shallow depth of field,\" \"50mm with background softness,\" \"wide angle for environmental context\"\n\n**Camera movement**: Keep it simple and precise\n- Examples: \"slow push-in,\" \"dolly left to right,\" \"static handheld,\" \"crane up revealing skyline\"\n\n**Mood**: Emotional tone\n- Examples: \"tense and cinematic,\" \"warm and nostalgic,\" \"playful suspense\"\n\n### 4. Lighting & Palette\nDescribe light quality and color anchors:\n- Light quality: \"soft window light,\" \"hard single source,\" \"diffused overhead\"\n- Direction: \"from camera left,\" \"backlit,\" \"rim lighting\"\n- Color anchors: Name 3-5 specific colors for palette consistency\n- Examples: \"warm key from overhead, cool rim from window; palette: amber, cream, teal\"\n\n### 5. Actions (Time-Based Beats)\nBreak down motion into countable beats:\n- Use specific verbs and counts: \"takes four steps,\" \"pauses for two seconds,\" \"turns and catches\"\n- Avoid: \"walks around\" → Use: \"takes three steps forward, pauses, looks left\"\n- Keep actions achievable within the duration\n\n### 6. Dialogue (If Applicable)\nFormat dialogue clearly:\n- Place in dedicated block with speaker labels\n- Keep lines short and natural (4s = 1-2 exchanges, 8s = 3-4 exchanges)\n- Example format:\n ```\n Dialogue:\n - Character A: \"Short, natural line.\"\n - Character B: \"Response that fits timing.\"\n ```\n\n### 7. 
Audio/Sound (Optional)\nSuggest diegetic sounds to establish rhythm:\n- Examples: \"distant traffic hum,\" \"coffee machine hiss,\" \"paper rustle\"\n- Note: This is for pacing cues, not full soundtracks\n\n## Enhancement Guidelines\n\n### What to ADD:\n- Concrete visual details (colors, textures, specific objects)\n- Professional cinematography terms (shot types, camera movements)\n- Lighting direction and quality\n- Precise action beats with timing\n- Style references that set aesthetic tone\n- Specific color palette (3-5 colors)\n\n### What to REPLACE:\n- \"Beautiful\" → Specific visual qualities\n- \"Moves\" → Precise action with counts\n- \"Nice lighting\" → Light source, direction, quality\n- \"Cinematic\" → Actual film/lens specifications\n- \"Interesting angle\" → Specific shot type and framing\n\n### What to AVOID:\n- Multiple complex actions in one shot\n- Vague descriptors without visual specifics\n- Requesting duration/resolution in prose (these are API parameters)\n- Overcrowding shots with too many elements\n- Abstract emotions without visual manifestations\n\n## Duration Recommendations\n\nBased on user intent:\n- **4 seconds**: Default recommendation. Most reliable for instruction following. Best for single clear action.\n- **8 seconds**: When user needs slightly more development. Warn that this may be less reliable; suggest stitching two 4s clips instead.\n- **12 seconds**: Only when explicitly requested. Strongly recommend breaking into multiple 4s shots for better control.\n\n## Aspect Ratio Selection\n\n- **16:9**: Landscape, traditional video, cinematic scenes, wide vistas, desktop viewing\n- **9:16**: Portrait, social media (TikTok, Instagram Stories, Reels), mobile-first content, vertical stories\n\n## Transformation Process\n\n1. **Analyze** user input for core intent\n2. **Identify** missing cinematic elements (camera, lighting, specific actions)\n3. **Expand** vague descriptions into concrete visuals\n4. 
**Structure** using the framework above\n5. **Optimize** for the chosen duration\n6. **Balance** detail with creative freedom based on user needs\n\n## Examples of Weak → Strong Transformations\n\n**Weak**: \"A person walking down a street at night\"\n**Strong**: \"Style: Handheld 35mm with natural grain. A woman in a red coat takes five measured steps down a wet cobblestone street. Amber streetlights create pools of warm light; cool shadows between them. Camera: medium tracking shot, following from behind at shoulder level. Mood: solitary, urban noir. Lighting: practical streetlights only; reflections in puddles.\"\n\n**Weak**: \"Make it look cinematic\"\n**Strong**: \"Camera: wide shot, slow dolly-in. Lens: 40mm spherical with shallow DOF. Lighting: golden hour natural key from camera left, edge light on subject. Palette: warm amber, deep teal, cream. Mood: nostalgic, intimate.\"\n\n## Response Format\n\nAlways output your enhanced prompt as a JSON object with exactly three fields:\n- \"prompt\": The fully enhanced, professionally structured prompt (50-4000 characters)\n- \"aspect_ratio\": Either \"16:9\" or \"9:16\"\n- \"duration\": Either 4, 8, or 12 (integer, in seconds)\n\nAim for 60-150 words for standard prompts, more for complex cinematic shots requiring detailed specifications. Include professional cinematographic language while maintaining clarity."
}
]
},
"promptType": "define",
"hasOutputParser": true
},
"typeVersion": 1.7
},
{
"id": "2cfdf665-263a-441a-abef-07136a766458",
"name": "JSON 输出解析器",
"type": "@n8n/n8n-nodes-langchain.outputParserStructured",
"position": [
5984,
2768
],
"parameters": {
"schemaType": "manual",
"inputSchema": "{\n \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n \"title\": \"Sora2VideoGenerationRequest\",\n \"description\": \"Simplified schema for Sora 2 video generation with enhanced prompts\",\n \"type\": \"object\",\n \"required\": [\"prompt\", \"aspect_ratio\", \"duration\"],\n \"properties\": {\n \"prompt\": {\n \"type\": \"string\",\n \"description\": \"The fully enhanced, professionally structured prompt optimized for Sora 2 video generation with cinematography details, specific actions, lighting, and visual specifics\",\n \"minLength\": 50,\n \"maxLength\": 4000\n },\n \"aspect_ratio\": {\n \"type\": \"string\",\n \"enum\": [\"16:9\", \"9:16\"],\n \"description\": \"Video aspect ratio. 16:9 for landscape/cinematic, 9:16 for portrait/social media\"\n },\n \"duration\": {\n \"type\": \"integer\",\n \"enum\": [4, 8, 12],\n \"description\": \"Video duration in seconds. 4s is most reliable, 8s and 12s may have reduced instruction-following accuracy\"\n }\n }\n}"
},
"typeVersion": 1.3
},
{
"id": "e857b653-7894-43d8-8598-8d6998076a65",
"name": "临时图像上传",
"type": "n8n-nodes-base.httpRequest",
"position": [
5888,
2320
],
"parameters": {
"url": "=https://tmpfiles.org/api/v1/upload",
"method": "POST",
"options": {},
"sendBody": true,
"contentType": "multipart-form-data",
"bodyParameters": {
"parameters": [
{
"name": "file",
"parameterType": "formBinaryData",
"inputDataFieldName": "Image"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "e8760386-b433-473d-9099-bb2f03c7c3aa",
"name": "文本转视频调用",
"type": "n8n-nodes-base.httpRequest",
"position": [
6144,
2592
],
"parameters": {
"url": "=https://queue.fal.run/fal-ai/sora-2/text-to-video{{ $('Video Input Form').item.json.Model[0] === 'sora-2-pro' ? '/pro' : '' }}",
"method": "POST",
"options": {},
"jsonBody": "={\n \"prompt\": \"{{ JSON.stringify($('Prompt Refiner').item.json.output.prompt.replaceAll(/\\\\n/g, '')).slice(1, -1) }}\",\n \"resolution\": \"720p\",\n \"aspect_ratio\": \"{{ $('Prompt Refiner').item.json.output.aspect_ratio }}\",\n \"duration\": {{ $('Prompt Refiner').item.json.output.duration }}\n}",
"sendBody": true,
"specifyBody": "json",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth"
},
"credentials": {
"httpHeaderAuth": {
"id": "KvEry5tdcrr9vqqw",
"name": "Vapi - Zephoric"
}
},
"typeVersion": 4.2
},
{
"id": "2d84b59f-4c4e-4fac-968a-81eb6482c04e",
"name": "输入模式路由器",
"type": "n8n-nodes-base.switch",
"position": [
5600,
2448
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "Image to Video",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "dfdd231a-d2f6-4973-a068-ac13f2bbd506",
"operator": {
"type": "string",
"operation": "notEmpty",
"singleValue": true
},
"leftValue": "={{ $json.Image.filename }}",
"rightValue": ""
}
]
},
"renameOutput": true
},
{
"outputKey": "Text to Video ",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "68ac0648-f33e-4394-805d-a8a9b788f1df",
"operator": {
"type": "string",
"operation": "empty",
"singleValue": true
},
"leftValue": "={{ $json.Image.filename }}",
"rightValue": ""
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"typeVersion": 3.3
},
{
"id": "eef5e174-4d2a-450f-8bac-dd343865ad6b",
"name": "图像转视频调用",
"type": "n8n-nodes-base.httpRequest",
"position": [
6064,
2320
],
"parameters": {
"url": "=https://queue.fal.run/fal-ai/sora-2/image-to-video{{ $('Video Input Form').item.json.Model[0] === 'sora-2-pro' ? '/pro' : '' }}",
"method": "POST",
"options": {},
"jsonBody": "={\n \"prompt\": \"{{ JSON.stringify($('Video Input Form').item.json.Prompt.replaceAll(/\\\\n/g, '')).slice(1, -1) }}\",\n \"resolution\": \"auto\",\n \"aspect_ratio\": \"{{ $('Video Input Form').item.json['Aspect Ratio'].replaceAll(' (vertical)', '').replaceAll(' (Horizontal)', '') }}\",\n \"duration\": {{ $('Video Input Form').item.json.Lenght[0].replaceAll('s', '') }},\n \"image_url\": \"{{ $json.data.url.replaceAll('.org/', '.org/dl/') }}\"\n}",
"sendBody": true,
"specifyBody": "json",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth"
},
"credentials": {
"httpHeaderAuth": {
"id": "KvEry5tdcrr9vqqw",
"name": "Vapi - Zephoric"
}
},
"typeVersion": 4.2
},
{
"id": "c07d5074-c4b7-4f43-a0d7-f26e2cd37137",
"name": "等待 60 秒",
"type": "n8n-nodes-base.wait",
"position": [
6336,
2416
],
"webhookId": "caad73e3-58d8-4fbd-a3e9-c42424f2d1ee",
"parameters": {
"amount": 60
},
"typeVersion": 1.1
},
{
"id": "557e1785-4e6c-4f2e-a9a4-3098b1d638cb",
"name": "状态检查",
"type": "n8n-nodes-base.httpRequest",
"position": [
6544,
2416
],
"parameters": {
"url": "=https://queue.fal.run/fal-ai/sora-2/requests/{{ $json.request_id }}/status",
"options": {},
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth"
},
"credentials": {
"httpHeaderAuth": {
"id": "KvEry5tdcrr9vqqw",
"name": "Vapi - Zephoric"
}
},
"typeVersion": 4.2
},
{
"id": "320c2ee0-767d-4106-a605-b03bd1a80631",
"name": "状态路由器",
"type": "n8n-nodes-base.switch",
"position": [
6752,
2416
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "Done",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "d8b8dbdc-1ad9-4ab9-8b2d-e76fd5db0899",
"operator": {
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json.status }}",
"rightValue": "COMPLETED"
}
]
},
"renameOutput": true
},
{
"outputKey": "Progress",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "9c10982c-5f8c-4eec-9b8a-f4b42e99ecf9",
"operator": {
"type": "string",
"operation": "notEquals"
},
"leftValue": "={{ $json.status }}",
"rightValue": "COMPLETED"
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"typeVersion": 3.2
},
{
"id": "cfa44af4-e910-40cf-b581-1acee757871d",
"name": "检索视频",
"type": "n8n-nodes-base.httpRequest",
"position": [
6944,
2400
],
"parameters": {
"url": "=https://queue.fal.run/fal-ai/sora-2/requests/{{ $json.request_id }}",
"options": {},
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth"
},
"credentials": {
"httpHeaderAuth": {
"id": "KvEry5tdcrr9vqqw",
"name": "Vapi - Zephoric"
}
},
"typeVersion": 4.2
},
{
"id": "6ce27f08-cae8-4a9b-bac3-2e01fa31c864",
"name": "视频重定向",
"type": "n8n-nodes-base.form",
"position": [
7152,
2400
],
"webhookId": "7b9192c7-c2ff-47b1-b893-bfbea42dd268",
"parameters": {
"options": {},
"operation": "completion",
"redirectUrl": "={{ $json.video.url }}",
"respondWith": "redirect"
},
"typeVersion": 2.3
},
{
"id": "d110e220-883b-4e3f-bbda-cd195ddf23dd",
"name": "概述说明8",
"type": "n8n-nodes-base.stickyNote",
"position": [
4528,
1984
],
"parameters": {
"color": 4,
"width": 696,
"height": 1184,
"content": "# 🎬 通过 Fal 使用 GPT-5 优化的 Sora 2 视频生成器"
},
"typeVersion": 1
},
{
"id": "6dad83d5-6cb1-4dca-90a1-c12048b5b544",
"name": "注意:表单触发器1",
"type": "n8n-nodes-base.stickyNote",
"position": [
5248,
2176
],
"parameters": {
"color": 6,
"width": 332,
"height": 192,
"content": "## 📝 视频输入表单"
},
"typeVersion": 1
},
{
"id": "6e91faf7-a3c5-48ce-94d6-a993ea8e5d8e",
"name": "注意:轮询循环1",
"type": "n8n-nodes-base.stickyNote",
"position": [
6688,
2640
],
"parameters": {
"color": 2,
"width": 332,
"height": 192,
"content": "## ⏳ 状态轮询循环"
},
"typeVersion": 1
},
{
"id": "ad9eadd7-1e7f-4922-8ca7-6a673812ffd0",
"name": "优化器模型",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
5840,
2768
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-5",
"cachedResultName": "gpt-5"
},
"options": {}
},
"credentials": {
"openAiApi": {
"id": "ymMvgDroJHalwvRf",
"name": "OpenAI"
}
},
"typeVersion": 1.2
}
],
"pinData": {},
"connections": {
"Status Check": {
"main": [
[
{
"node": "Status Router",
"type": "main",
"index": 0
}
]
]
},
"Refiner Model": {
"ai_languageModel": [
[
{
"node": "Prompt Refiner",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Status Router": {
"main": [
[
{
"node": "Retrieve Video",
"type": "main",
"index": 0
}
],
[
{
"node": "Wait 60 Seconds",
"type": "main",
"index": 0
}
]
]
},
"Prompt Refiner": {
"main": [
[
{
"node": "Text-to-Video Call",
"type": "main",
"index": 0
}
]
]
},
"Retrieve Video": {
"main": [
[
{
"node": "Video Redirect",
"type": "main",
"index": 0
}
]
]
},
"Wait 60 Seconds": {
"main": [
[
{
"node": "Status Check",
"type": "main",
"index": 0
}
]
]
},
"Video Input Form": {
"main": [
[
{
"node": "Input Mode Router",
"type": "main",
"index": 0
}
]
]
},
"Input Mode Router": {
"main": [
[
{
"node": "Temp Image Upload",
"type": "main",
"index": 0
}
],
[
{
"node": "Prompt Refiner",
"type": "main",
"index": 0
}
]
]
},
"Temp Image Upload": {
"main": [
[
{
"node": "Image-to-Video Call",
"type": "main",
"index": 0
}
]
]
},
"JSON Output Parser": {
"ai_outputParser": [
[
{
"node": "Prompt Refiner",
"type": "ai_outputParser",
"index": 0
}
]
]
},
"Text-to-Video Call": {
"main": [
[
{
"node": "Wait 60 Seconds",
"type": "main",
"index": 0
}
]
]
},
"Image-to-Video Call": {
"main": [
[
{
"node": "Wait 60 Seconds",
"type": "main",
"index": 0
}
]
]
}
}
}
FAQ
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings as needed.
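For readers adapting the workflow outside n8n: the node chain Text-to-Video Call → Wait 60 Seconds → Status Check → Status Router → Retrieve Video implements fal.ai's queue pattern. Below is a minimal Python sketch of that loop, assuming the same endpoints and payload fields shown in the JSON above; treat it as an illustration, not fal.ai's official client.

```python
import json
import time
import urllib.request

BASE = "https://queue.fal.run/fal-ai/sora-2"

def endpoint(model: str, mode: str) -> str:
    """Pick the queue endpoint the way the workflow's expression does:
    append '/pro' when the form selected sora-2-pro."""
    suffix = "/pro" if model == "sora-2-pro" else ""
    return f"{BASE}/{mode}{suffix}"

def _get(url: str, api_key: str) -> dict:
    req = urllib.request.Request(url, headers={"Authorization": f"Key {api_key}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate(prompt: str, api_key: str, model: str = "sora-2") -> dict:
    """Submit a text-to-video job, poll until COMPLETED, return the result."""
    body = json.dumps({"prompt": prompt, "resolution": "720p",
                       "aspect_ratio": "16:9", "duration": 4}).encode()
    req = urllib.request.Request(endpoint(model, "text-to-video"), data=body,
                                 headers={"Authorization": f"Key {api_key}",
                                          "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        request_id = json.load(resp)["request_id"]
    while True:
        status = _get(f"{BASE}/requests/{request_id}/status", api_key)
        if status["status"] == "COMPLETED":  # same check as the Status Router node
            return _get(f"{BASE}/requests/{request_id}", api_key)
        time.sleep(60)                       # same cadence as Wait 60 Seconds
```

The fixed 60-second sleep mirrors the workflow's Wait node; a production client would more likely use the `status_url` returned by the submit call and a shorter backoff.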
What scenarios is this workflow suited for?
Advanced - Content Creation, Multimodal AI
Does it cost anything?
The workflow itself is free to import and use. Note, however, that the third-party services it relies on (such as the OpenAI API) may charge for usage.
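One detail worth noting if you modify the Image-to-Video Call node: its expressions turn the human-readable form values into API parameters by plain string cleanup, and rewrite the tmpfiles.org page URL into its direct-download form. A small Python sketch of that mapping (the helper name is illustrative, not part of the workflow):

```python
def normalize_inputs(aspect_label: str, length_label: str, tmp_url: str) -> dict:
    """Reproduce the string cleanup done in the Image-to-Video Call node:
    form labels like '16:9 (Horizontal)' and '8s' become API values, and
    the tmpfiles.org URL gains the '/dl/' path segment for direct download."""
    return {
        "aspect_ratio": aspect_label.replace(" (vertical)", "")
                                    .replace(" (Horizontal)", ""),
        "duration": int(length_label.rstrip("s")),
        "image_url": tmp_url.replace(".org/", ".org/dl/"),
    }
```

Keeping this logic in mind matters when you rename form options: the dropdown labels and the stripping expressions must stay in sync.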
Workflow Info
Difficulty: Advanced
Nodes: 22
Categories: 2
Node types: 9
Author
Daniel Nkencho
@daniel-automates - AI Automation Consultant | Helping Business Owners Implement AI Systems for Growth and Lead Gen
External link
View on n8n.io →