[Template] ICP Creator
This is a Market Research / AI Summarization automation workflow containing 10 nodes. It mainly uses nodes such as Code, GoogleDocs, FormTrigger, HttpRequest, and ChainLlm, combining GPT and Google Docs to generate complete Ideal Customer Profiles (ICPs) from website content. A standalone sketch of the crawl request the workflow sends appears after the requirements list below.
- Firecrawl API key (used as the Authorization bearer token of the crawl requests)
- OpenAI API key
- Google Docs OAuth2 credentials with access to the target Drive folder
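The crawl step is the entry point of the pipeline: the "Crawl Website" HTTP Request node POSTs a job to Firecrawl's v2 crawl endpoint with the body shown in the JSON below. The following is a minimal standalone sketch of that same call, assuming Node.js 18+ and a Firecrawl key exported as FIRECRAWL_API_KEY (the variable name is an assumption; the template itself uses a {{API_KEY}} placeholder in the header).

```javascript
// Sketch of the crawl request the "Crawl Website" node sends.
// FIRECRAWL_API_KEY is an assumed environment variable, not part of the template.
const body = {
  url: 'https://vertodigital.com', // filled from the form's "Website URL" field
  sitemap: 'include',
  crawlEntireDomain: true,
  limit: 20,                       // at most 20 pages, as configured in the node
  prompt: 'crawl entire website',
  scrapeOptions: {
    onlyMainContent: true,
    maxAge: 172800000,             // accept cached results up to 48 hours old
    formats: ['markdown'],
  },
};

fetch('https://api.firecrawl.dev/v2/crawl', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
  },
  body: JSON.stringify(body),
})
  .then((res) => res.json())
  .then((job) => console.log('Crawl job started:', job))
  .catch(console.error);
```

The crawl is asynchronous: Firecrawl answers with a status URL, which the next node ("Extract Website Content") requests with the same Authorization header to retrieve the scraped Markdown once the job completes.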
Nodes used (10)
{
"id": "oqHHvTSLFsPgBGdy",
"meta": {
"instanceId": "ec3694b688f9802e6ff1f61802ca6bc63a40ecfa76d32f2c71cfdf2978426fce",
"templateCredsSetupCompleted": true
},
"name": "[TEMPLATE] ICP Creator",
"tags": [],
"nodes": [
{
"id": "8566b8fd-65ca-4119-93fd-fa8e4bf9a7a2",
"name": "Rastrear sitio web",
"type": "n8n-nodes-base.httpRequest",
"position": [
208,
0
],
"parameters": {
"url": "https://api.firecrawl.dev/v2/crawl",
"method": "POST",
"options": {},
"jsonBody": "={\n \"url\": \"{{ $json['Website URL'] }}\",\n \"sitemap\": \"include\",\n \"crawlEntireDomain\": true,\n \"limit\": 20,\n \"prompt\": \"crawl entire website\",\n \"scrapeOptions\": {\n \"onlyMainContent\": true,\n \"maxAge\": 172800000,\n \"formats\": [\n \"markdown\"\n ]\n }\n} ",
"sendBody": true,
"sendHeaders": true,
"specifyBody": "json",
"headerParameters": {
"parameters": [
{
"name": "Authorization",
"value": "Bearer {{API_KEY}}"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "8e4b4d46-279c-40a5-8f84-ec3654526c9e",
"name": "Extraer contenido del sitio web",
"type": "n8n-nodes-base.httpRequest",
"position": [
432,
0
],
"parameters": {
"url": "={{ $json.url }}",
"options": {},
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "Authorization",
"value": "Bearer fc-afdcaee8b15c4f858109766aa71bac2e"
}
]
}
},
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "2ddb507d-b910-433f-88f4-c4838465bbb5",
"name": "Creador de ICP",
"type": "@n8n/n8n-nodes-langchain.chainLlm",
"onError": "continueRegularOutput",
"position": [
656,
0
],
"parameters": {
"text": "=DATA INPUT\nEverything between <website_data> and </website_data> is the ONLY source of truth. Do not browse the web. Do not hallucinate. Quote the site when possible.\n\n<website_data>\n{{ JSON.stringify($json.data) }}\n</website_data>\n\nOUTPUT FORMAT (MARKDOWN)\n\nFormat the entire answer in normal Markdown (headings, lists, short tables).\n\nStart directly with the content. Do not wrap the whole answer in any code fence. Never write ```markdown in the beginning and ```\\n at the end, just start with the content.\n\nKeep long sentences out of tables. Tables are for keywords/short phrases/numbers only.\n\nWORKING RULES\n\nDetect context: Decide if the business is B2B, B2C, or Hybrid. If unclear, state the ambiguity and proceed with the most likely case.\n\nFacts vs. Inferences:\n\nTag direct facts with (Fact) and include a brief quote or field reference.\n\nTag deductions with (Inference) and add a one-line rationale.\n\nIf unknown: “Unknown — evidence not found.”\n\nSpecificity over fluff: Prefer concrete ranges and crisp qualifiers (e.g., “50–200 employees”, “Series B SaaS”, “EU/UK”).\n\nVariants: If multiple products/markets are present, create Primary and Secondary ICP variants and explain why.\n\nConfidence: Add a 0–100 confidence score per section with a one-line rationale.\n\nScope control: If data is thin, call it out explicitly and expand the Open Questions list rather than guessing.\n\nDELIVERABLES (produce ALL, in this exact order)\n\nA) Executive Summary (150–220 words)\n\nWhat the company sells (1–2 lines).\n\nWho it is best for (firmographic snapshot).\n\nWhy they buy (top 3 pains & triggers).\n\nHow to find & win them (channels + core message).\n\nKey uncertainties to validate next.\n\nB) One-Pager ICP (Narrative)\n\nProblem → Solution fit (jobs-to-be-done).\n\n“Why now” triggers (events that create urgency).\n\nDifferentiators vs. alternatives (bulleted).\n\nCritical requirements (e.g., security/compliance/integrations).\n\nRisks & exclusions (who is NOT a fit and why).\n\nC) Tiering & Lead Scoring (Concise)\n\nDefine T1 (perfect fit), T2 (good fit), T3 (opportunistic).\n\nProvide a scoring formula (0–100) using the weights from the JSON.\n\nGive 5–10 example filters for LinkedIn/Sales tools (industry, headcount, tech, geo, keywords).\n\nD) Demand Gen & ABM Plays (Practical)\n\nTop 3 channels and why (1–2 lines each).\n\n3 campaign ideas mapped to top pains/use cases.\n\n3 sales plays per key persona (opening line + suggested asset).\n\n3 activation metrics to monitor (e.g., %T1 leads, opp rate, win rate).\n\nE) Evidence Log\n\n5–10 strongest excerpts from the site (short quotes or field names). Tag each (Fact).\n\nF) Section Confidence\n\nFor each major section (A–F), include confidence 0–100 with a one-line rationale.\n\nPROCESS (how to work)\n\nParse <website_data>; if it’s HTML/JSON, extract product, audience, value props, pricing, industries, logos/case studies, integrations, careers page hints (size/stack), and press pages (stage).\n\nList facts vs. 
inferences explicitly.\n\nBuild Primary ICP; add Secondary if a materially different segment exists.\n\nConstruct scoring rubric and tier definitions tied to business realities in the data.\n\nPopulate open questions; propose how to validate quickly (calls, analytics, CRM fields).\n\nQUALITY BAR (self-check before finalizing)\n\nAre Facts/Inferences clearly labeled?\n\nAre negative ICP/exclusions present?\n\nIs the scoring rubric actionable and aligned with data?\n\nAre uncertainties and next steps explicit?\n\nAre tables succinct (no long sentences)?\n\nRETURN\nReturn only the final deliverables in Markdown (sections A→F). Do not include your internal notes or this instruction block.",
"messages": {
"messageValues": [
{
"message": "You are a senior B2B/B2C go-to-market analyst specializing in Ideal Customer Profiles (ICP), Account-Based Marketing (ABM), and sales enablement. Your job is to extract facts, infer patterns cautiously, and produce a rigorous, decision-ready ICP."
}
]
},
"promptType": "define"
},
"retryOnFail": true,
"typeVersion": 1.5,
"alwaysOutputData": false
},
{
"id": "8ea17362-a351-43c4-aedd-f610de45cc8b",
"name": "Modelo de chat OpenAI",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
688,
176
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {}
},
"credentials": {
"openAiApi": {
"id": "SVy0dG502VH2c1j9",
"name": "OpenAi account"
}
},
"typeVersion": 1.2
},
{
"id": "21494f4c-4c24-4420-a921-373f65a8bd35",
"name": "Crear un documento",
"type": "n8n-nodes-base.googleDocs",
"position": [
1200,
0
],
"parameters": {
"title": "=ICP for {{ $('On form submission').item.json['Business Name'] }}",
"driveId": "sharedWithMe",
"folderId": "={{google_drive_folder_id}}"
},
"credentials": {
"googleDocsOAuth2Api": {
"id": "M5Z3DWLrDsl5ycru",
"name": "Google Docs account"
}
},
"executeOnce": true,
"typeVersion": 2
},
{
"id": "7d62c33d-6d67-4ff3-9008-a9cfaf28fe10",
"name": "Actualizar un documento",
"type": "n8n-nodes-base.httpRequest",
"onError": "continueRegularOutput",
"position": [
1424,
0
],
"parameters": {
"url": "=https://docs.googleapis.com/v1/documents/{{ $('Create a document').item.json.id }}:batchUpdate",
"method": "POST",
"options": {},
"jsonBody": "={{ JSON.stringify($('Markdown to Google Doc').item.json) }}",
"sendBody": true,
"specifyBody": "json",
"authentication": "predefinedCredentialType",
"nodeCredentialType": "googleDocsOAuth2Api"
},
"credentials": {
"googleDocsOAuth2Api": {
"id": "M5Z3DWLrDsl5ycru",
"name": "Google Docs account"
}
},
"typeVersion": 4.2
},
{
"id": "891d5c66-6c1d-4e02-bfff-db81c633a4b1",
"name": "Al enviar el formulario",
"type": "n8n-nodes-base.formTrigger",
"position": [
0,
0
],
"webhookId": "1f6b05db-6897-49d3-bdcd-13a9d005f5ef",
"parameters": {
"options": {
"appendAttribution": false,
"respondWithOptions": {
"values": {
"redirectUrl": "={{google_drive_folder_url}}",
"respondWith": "redirect"
}
}
},
"formTitle": "Create your ICP",
"formFields": {
"values": [
{
"html": "<p style=\"text-align:left; margin:0 0 18px 0;\"> Enter the company website URL and business name to generate a first‑draft Ideal Customer Profile (ICP) based on your site’s content. Once completed, you'll be redirected to a Google Drive folder containing your ICP document. </p> <p style=\"text-align:left; margin:0 0 12px 0;\"> Here is how the workflow works: </p> <ol style=\"text-align:left; margin:0; padding-left:20px;\"> <li style=\"text-align:left; margin:0 0 12px 0;\">Website is crawled and scraped (sitemap included, up to 20 pages, main content only).</li> <li style=\"text-align:left; margin:0 0 12px 0;\">AI ICP Creator analyzes on‑site content only and drafts the ICP (no external browsing, facts vs. inferences).</li> <li style=\"text-align:left; margin:0 0 12px 0;\">ICP includes: Executive Summary, One‑Pager ICP, Tiering & Lead Scoring, Demand Gen & ABM Plays, Evidence Log, Section Confidence.</li> <li style=\"text-align:left; margin:0 0 12px 0;\">Markdown output is converted into a formatted Google Doc.</li> <li style=\"text-align:left; margin:0 0 12px 0;\">Google Doc “ICP for <Business Name>” is created in Google Drive.</li> <li style=\"text-align:left; margin:0;\">You will be redirected to the Google Drive folder containing your ICP.</li> </ol>",
"fieldType": "html"
},
{
"fieldLabel": "Website URL",
"placeholder": "vertodigital.com",
"requiredField": true
},
{
"fieldLabel": "Business Name",
"placeholder": "VertoDigital",
"requiredField": true
}
]
},
"responseMode": "lastNode"
},
"typeVersion": 2.2
},
{
"id": "b39d7fd1-0c9b-491c-9b47-103d1a76bfff",
"name": "Markdown a Google Doc",
"type": "n8n-nodes-base.code",
"position": [
976,
0
],
"parameters": {
"jsCode": "// Markdown → Google Docs batchUpdate (headings/lists/inline + real tables; fixed delete ranges)\n// Input: items[0].json.text (Markdown)\n// Output: { json: { requests: [...] } }\n\nconst rawInput = items[0].json.text ?? \"\";\n\n/* =========================\n Helpers & Normalization\n ========================= */\nfunction hexToRgb(hex) {\n if (!hex || !hex.startsWith('#')) return null;\n const r = parseInt(hex.slice(1, 3), 16) / 255;\n const g = parseInt(hex.slice(3, 5), 16) / 255;\n const b = parseInt(hex.slice(5, 7), 16) / 255;\n return { red: r, green: g, blue: b };\n}\nfunction headingStyleForLevel(level) {\n const map = { 1: 'HEADING_1', 2: 'HEADING_2', 3: 'HEADING_3', 4: 'HEADING_4', 5: 'HEADING_5', 6: 'HEADING_6' };\n return map[level] || 'HEADING_1';\n}\n// Collapse 3+ blank lines → 2; trim trailing spaces\nfunction normalizeMarkdown(md) {\n return md\n .replace(/\\r\\n/g, '\\n')\n .replace(/[ \\t]+\\n/g, '\\n')\n .replace(/\\n{3,}/g, '\\n\\n')\n .replace(/[ \\t]+$/gm, '');\n}\n\n// Parse inline markdown for one line → { text, spans[] } (spans: { offsetStart, offsetEnd, bold?, italic?, link? })\nfunction parseInline(line) {\n let text = line;\n const spans = [];\n\n // Safety: handle legacy LINK markers\n if (text.includes('\\u0001LINKSTART\\u0001')) {\n let acc = '';\n for (let i = 0; i < text.length; ) {\n if (text.startsWith('\\u0001LINKSTART\\u0001', i)) {\n i += '\\u0001LINKSTART\\u0001'.length;\n const urlTag = '\\u0001LINKURL:';\n const endTag = '\\u0001LINKEND\\u0001';\n const urlIdx = text.indexOf(urlTag, i);\n const endIdx = text.indexOf(endTag, i);\n const label = text.slice(i, urlIdx);\n const url = text.slice(urlIdx + urlTag.length, endIdx);\n const start = acc.length;\n acc += label;\n spans.push({ offsetStart: start, offsetEnd: start + label.length, link: url });\n i = endIdx + endTag.length;\n } else acc += text[i++];\n }\n text = acc;\n }\n\n // Markdown links: [label](url)\n text = text.replace(/\\[([^\\]]+)\\]\\((https?:\\/\\/[^\\s)]+)\\)/g, (_m, label, url) => `\\u0002L${label}\\u0002U${url}\\u0002E`);\n\n // Bold+Italic ***text*** or ___text___\n text = text.replace(/(\\*{3}|_{3})([\\s\\S]*?)\\1/g, (_m, _w, inner) => `\\u0002BIS${inner}\\u0002BIE`);\n // Bold **text** or __text__\n text = text.replace(/(\\*{2}|__)([\\s\\S]*?)\\1/g, (_m, _w, inner) => `\\u0002BS${inner}\\u0002BE`);\n // Italic *text* or _text_\n text = text.replace(/(\\*|_)([\\s\\S]*?)\\1/g, (_m, _w, inner) => `\\u0002IS${inner}\\u0002IE`);\n\n // Decode markers → clean text + spans\n let clean = '';\n for (let i = 0; i < text.length; ) {\n if (text.startsWith('\\u0002L', i)) {\n i += 2;\n const uIdx = text.indexOf('\\u0002U', i);\n const eIdx = text.indexOf('\\u0002E', i);\n const label = text.slice(i, uIdx);\n const url = text.slice(uIdx + 2, eIdx);\n const start = clean.length;\n clean += label;\n spans.push({ offsetStart: start, offsetEnd: start + label.length, link: url });\n i = eIdx + 2;\n continue;\n }\n if (text.startsWith('\\u0002BIS', i)) {\n i += 4; const eIdx = text.indexOf('\\u0002BIE', i);\n const inner = text.slice(i, eIdx);\n const start = clean.length; clean += inner;\n spans.push({ offsetStart: start, offsetEnd: start + inner.length, bold: true, italic: true });\n i = eIdx + 4; continue;\n }\n if (text.startsWith('\\u0002BS', i)) {\n i += 3; const eIdx = text.indexOf('\\u0002BE', i);\n const inner = text.slice(i, eIdx);\n const start = clean.length; clean += inner;\n spans.push({ offsetStart: start, offsetEnd: start + inner.length, bold: true });\n i = eIdx + 3; 
continue;\n }\n if (text.startsWith('\\u0002IS', i)) {\n i += 3; const eIdx = text.indexOf('\\u0002IE', i);\n const inner = text.slice(i, eIdx);\n const start = clean.length; clean += inner;\n spans.push({ offsetStart: start, offsetEnd: start + inner.length, italic: true });\n i = eIdx + 3; continue;\n }\n clean += text[i++];\n }\n return { text: clean, spans };\n}\n\n/* =========================\n Table parsing utilities\n ========================= */\nfunction isTableSeparator(line) {\n if (!line) return null;\n const cleaned = line.trim();\n const noEdges = cleaned.replace(/^\\|/, '').replace(/\\|$/, '');\n const parts = noEdges.split('|').map(s => s.trim());\n if (!parts.length) return null;\n const aligns = [];\n for (const p of parts) {\n if (!/^-{3,}:?$|^:?-{3,}:?$/.test(p)) return null;\n const left = p.startsWith(':'), right = p.endsWith(':');\n aligns.push(left && right ? 'CENTER' : (!left && right ? 'END' : 'START'));\n }\n return aligns;\n}\nfunction parseTableRow(line) {\n if (!line) return null;\n const t = line.trim();\n if (!t.includes('|')) return null;\n if (/^`{3,}/.test(t) || /^\\s*(?:-{3,}|_{3,}|\\*{3,})\\s*$/.test(t)) return null;\n const noEdges = t.replace(/^\\|/, '').replace(/\\|$/, '');\n return noEdges.split('|').map(c => c.replace(/\\\\\\|/g, '|').trim());\n}\n\n/* =========================\n Block-level parser (with tables)\n ========================= */\nfunction parseMarkdownToModel(md) {\n const lines = normalizeMarkdown(md).split('\\n');\n\n let docText = '';\n const blocks = []; // { start, end, kind, level? }\n const inlineSpans = []; // non-table inline spans\n const listGroups = []; // contiguous list ranges\n const tableBlocks = []; // tables to materialize\n let cursor = 0;\n\n function addListLine(kind, start, end) {\n const last = listGroups[listGroups.length - 1];\n if (last && last.kind === kind && last.end === start) last.end = end;\n else listGroups.push({ kind, start, end });\n }\n\n for (let i = 0; i < lines.length; i++) {\n const raw = lines[i];\n\n // HR\n if (/^\\s*(?:-{3,}|_{3,}|\\*{3,})\\s*$/.test(raw)) {\n const hr = '────────────────────────\\n';\n const start = cursor; docText += hr; const end = cursor + hr.length;\n blocks.push({ start, end, kind: 'hr' });\n cursor = end; continue;\n }\n\n // Table?\n const header = parseTableRow(raw);\n const aligns = isTableSeparator(lines[i + 1] || '');\n if (header && aligns) {\n const start = cursor;\n docText += raw + '\\n'; cursor += (raw + '\\n').length;\n const sepLine = lines[i + 1];\n docText += sepLine + '\\n'; cursor += (sepLine + '\\n').length;\n\n const colCount = aligns.length;\n const rowsText = [];\n const rowsSpans = [];\n\n const headerParsed = header.slice(0, colCount);\n while (headerParsed.length < colCount) headerParsed.push('');\n const headerInline = headerParsed.map(c => parseInline(c));\n rowsText.push(headerInline.map(o => o.text));\n rowsSpans.push(headerInline.map(o => o.spans));\n\n let j = i + 2;\n while (j < lines.length) {\n const r = parseTableRow(lines[j]);\n if (!r) break;\n const normalized = r.slice(0, colCount);\n while (normalized.length < colCount) normalized.push('');\n const parsed = normalized.map(c => parseInline(c));\n rowsText.push(parsed.map(o => o.text));\n rowsSpans.push(parsed.map(o => o.spans));\n\n docText += lines[j] + '\\n';\n cursor += (lines[j] + '\\n').length;\n j++;\n }\n const end = cursor;\n\n tableBlocks.push({ start, end, colCount, aligns, rowsText, rowsSpans });\n blocks.push({ start, end, kind: 'table' });\n\n i = j - 1;\n continue;\n 
}\n\n // Heading\n const h = raw.match(/^(\\s{0,3})(#{1,6})\\s+(.*)$/);\n if (h) {\n const level = h[2].length;\n const { text, spans } = parseInline(h[3]);\n const para = text + '\\n'; const start = cursor;\n docText += para; const end = cursor + para.length;\n blocks.push({ start, end, kind: 'heading', level });\n spans.forEach(sp => inlineSpans.push({ start: start + sp.offsetStart, end: start + sp.offsetEnd, ...sp }));\n cursor = end; continue;\n }\n\n // Blockquote\n const bq = raw.match(/^\\s*>\\s?(.*)$/);\n if (bq) {\n const { text, spans } = parseInline(bq[1]);\n const para = text + '\\n'; const start = cursor;\n docText += para; const end = cursor + para.length;\n blocks.push({ start, end, kind: 'blockquote' });\n spans.forEach(sp => inlineSpans.push({ start: start + sp.offsetStart, end: start + sp.offsetEnd, ...sp }));\n cursor = end; continue;\n }\n\n // Numbered list\n const num = raw.match(/^\\s*\\d+\\.\\s+(.*)$/);\n if (num) {\n const { text, spans } = parseInline(num[1]);\n const para = text + '\\n'; const start = cursor;\n docText += para; const end = cursor + para.length;\n blocks.push({ start, end, kind: 'numbered' });\n addListLine('numbered', start, end);\n spans.forEach(sp => inlineSpans.push({ start: start + sp.offsetStart, end: start + sp.offsetEnd, ...sp }));\n cursor = end; continue;\n }\n\n // Bulleted list\n const bul = raw.match(/^\\s*([-*•])\\s+(.*)$/);\n if (bul) {\n const { text, spans } = parseInline(bul[2]);\n const para = text + '\\n'; const start = cursor;\n docText += para; const end = cursor + para.length;\n blocks.push({ start, end, kind: 'bulleted' });\n addListLine('bulleted', start, end);\n spans.forEach(sp => inlineSpans.push({ start: start + sp.offsetStart, end: start + sp.offsetEnd, ...sp }));\n cursor = end; continue;\n }\n\n // Paragraph\n {\n const { text, spans } = parseInline(raw);\n const para = text + '\\n'; const start = cursor;\n docText += para; const end = cursor + para.length;\n blocks.push({ start, end, kind: 'paragraph' });\n spans.forEach(sp => inlineSpans.push({ start: start + sp.offsetStart, end: start + sp.offsetEnd, ...sp }));\n cursor = end;\n }\n }\n\n return { docText, blocks, inlineSpans, listGroups, tableBlocks };\n}\n\n/* =========================\n Build requests (strict-safe ranges)\n ========================= */\nconst { docText, blocks, inlineSpans, listGroups, tableBlocks } = parseMarkdownToModel(rawInput);\n\n// Insert at 1 to skip the first section break at 0\nconst baseOffset = 1;\nconst docLen = docText.length;\nconst segmentEnd = baseOffset + docLen; // exclusive\n\n// General range: clamp to stay strictly < segment end (for style ops)\nfunction absRangeSafe(relStart, relEnd) {\n const start = Math.max(baseOffset + relStart, baseOffset);\n const desiredEnd = baseOffset + relEnd;\n const hardMax = segmentEnd - 1; // must be < segment end\n const end = Math.max(Math.min(desiredEnd, hardMax), start + 1);\n if (end <= start) return null;\n return { startIndex: start, endIndex: end };\n}\n// Exact range for DELETE (we want to remove the whole table text; it's always inside body)\nfunction absRangeExactForDelete(relStart, relEnd) {\n const start = baseOffset + relStart;\n let end = baseOffset + relEnd;\n if (end >= segmentEnd) end = segmentEnd - 1; // keep < segment end\n if (end <= start) return null;\n return { startIndex: start, endIndex: end };\n}\nfunction absIndex(rel) { return baseOffset + rel; }\n\nconst requests = [];\n\n/* 1) Insert full markdown text (including raw table text we’ll convert later) 
*/\nrequests.push({ insertText: { location: { index: baseOffset }, text: docText } });\n\n/* 2) Headings & blockquotes (no global style pass) */\nfor (const b of blocks) {\n if (b.kind === 'heading') {\n const rng = absRangeSafe(b.start, b.end);\n if (!rng) continue;\n requests.push({\n updateParagraphStyle: {\n range: rng,\n paragraphStyle: {\n namedStyleType: headingStyleForLevel(b.level),\n alignment: 'START',\n spaceAbove: { magnitude: 0, unit: 'PT' },\n spaceBelow: { magnitude: 0, unit: 'PT' },\n indentStart: { magnitude: 0, unit: 'PT' },\n indentFirstLine: { magnitude: 0, unit: 'PT' }\n },\n fields: 'namedStyleType,alignment,spaceAbove,spaceBelow,indentStart,indentFirstLine'\n }\n });\n } else if (b.kind === 'blockquote') {\n const rng = absRangeSafe(b.start, b.end);\n if (!rng) continue;\n requests.push({\n updateParagraphStyle: {\n range: rng,\n paragraphStyle: {\n namedStyleType: 'NORMAL_TEXT',\n alignment: 'START',\n spaceAbove: { magnitude: 0, unit: 'PT' },\n spaceBelow: { magnitude: 0, unit: 'PT' },\n indentStart: { magnitude: 18, unit: 'PT' },\n indentFirstLine: { magnitude: 18, unit: 'PT' }\n },\n fields: 'namedStyleType,alignment,spaceAbove,spaceBelow,indentStart,indentFirstLine'\n }\n });\n requests.push({ updateTextStyle: { range: rng, textStyle: { italic: true }, fields: 'italic' } });\n }\n}\n\n/* 3) Lists → bullets */\nfor (const g of listGroups) {\n const rng = absRangeSafe(g.start, g.end);\n if (!rng) continue;\n requests.push({\n createParagraphBullets: {\n range: rng,\n bulletPreset: g.kind === 'numbered' ? 'NUMBERED_DECIMAL_ALPHA_ROMAN' : 'BULLET_DISC_CIRCLE_SQUARE'\n }\n });\n}\n\n/* 4) Inline styles for non-table text */\nfor (const sp of inlineSpans) {\n const rng = absRangeSafe(sp.start, sp.end);\n if (!rng) continue;\n const fields = [];\n const textStyle = {};\n if (sp.bold) { textStyle.bold = true; fields.push('bold'); }\n if (sp.italic) { textStyle.italic = true; fields.push('italic'); }\n if (sp.link) { textStyle.link = { url: sp.link }; fields.push('link'); }\n if (!fields.length) continue;\n requests.push({ updateTextStyle: { range: rng, textStyle, fields: fields.join(',') } });\n}\n\n/* 5) Convert Markdown tables → real Docs tables (process bottom-up to keep indices stable) */\nfunction buildTableInsertions(tableAbsIndex, rowsText /* 2D array */) {\n // Heuristic indexes that work for newly inserted tables: first cell content is a few chars after table start,\n // and each following cell is offset by ~2. We still reverse insertions so earlier inserts don't shift later targets.\n const cols = Math.max(...rowsText.map(r => r.length));\n let idx = tableAbsIndex + 5;\n const insertions = [];\n const cellStarts = rowsText.map(() => new Array(cols).fill(null));\n rowsText.forEach((row, i) => {\n const rowIndex = idx + (i === 0 ? 
0 : 3) - 1;\n row.forEach((cellText, j) => {\n const cellIdx = rowIndex + j * 2;\n cellStarts[i][j] = cellIdx;\n insertions.push({ index: cellIdx, text: cellText, r: i, c: j });\n idx = cellIdx + 1;\n });\n if (row.length < cols) idx += (cols - row.length) * 2;\n });\n insertions.reverse();\n return { insertions, cellStarts, columns: cols };\n}\nfunction paragraphAlignmentRequest(start, end, align) {\n return {\n updateParagraphStyle: {\n range: { startIndex: start, endIndex: end },\n paragraphStyle: { alignment: align },\n fields: 'alignment'\n }\n };\n}\nconst headerTextBg = hexToRgb('#F3F4F6');\nconst tablesDesc = [...tableBlocks].sort((a, b) => b.start - a.start);\n\nfor (const tb of tablesDesc) {\n const absStart = absIndex(tb.start);\n\n // Delete raw Markdown table text EXACTLY (no leftover pipes/spaces)\n const del = absRangeExactForDelete(tb.start, tb.end);\n if (!del) continue;\n requests.push({ deleteContentRange: { range: del } });\n\n // Insert empty table at same location\n requests.push({ insertTable: { rows: tb.rowsText.length, columns: tb.colCount, location: { index: absStart } } });\n\n // Insert cell text (reverse order) and capture starts for styling\n const { insertions, cellStarts } = buildTableInsertions(absStart, tb.rowsText);\n for (const ins of insertions) {\n requests.push({ insertText: { location: { index: ins.index }, text: ins.text } });\n }\n\n // Header: bold + text background (no tableStartLocation usage)\n for (let j = 0; j < tb.colCount; j++) {\n const s = cellStarts[0][j];\n const t = tb.rowsText[0][j] || '';\n if (s != null && t.length) {\n requests.push({\n updateTextStyle: {\n range: { startIndex: s, endIndex: s + t.length },\n textStyle: { bold: true, backgroundColor: { color: { rgbColor: headerTextBg } } },\n fields: 'bold,backgroundColor'\n }\n });\n }\n }\n\n // Alignment per column on all rows\n for (let r = 0; r < tb.rowsText.length; r++) {\n for (let c = 0; c < tb.colCount; c++) {\n const s = cellStarts[r][c];\n const t = tb.rowsText[r][c] || '';\n if (s == null || !t.length) continue;\n requests.push(paragraphAlignmentRequest(s, s + t.length, tb.aligns[c] || 'START'));\n }\n }\n\n // Inline styles inside cells\n for (let r = 0; r < tb.rowsText.length; r++) {\n for (let c = 0; c < tb.colCount; c++) {\n const s = cellStarts[r][c];\n const spans = tb.rowsSpans[r][c] || [];\n for (const sp of spans) {\n const start = s + sp.offsetStart;\n const end = s + sp.offsetEnd;\n const fields = [];\n const textStyle = {};\n if (sp.bold) { textStyle.bold = true; fields.push('bold'); }\n if (sp.italic) { textStyle.italic = true; fields.push('italic'); }\n if (sp.link) { textStyle.link = { url: sp.link }; fields.push('link'); }\n if (fields.length) requests.push({ updateTextStyle: { range: { startIndex: start, endIndex: end }, textStyle, fields: fields.join(',') } });\n }\n }\n }\n}\n\nreturn { json: { requests } };\n"
},
"typeVersion": 2
},
{
"id": "4662a860-4f08-4148-be27-ff7bcb6bdfe1",
"name": "Nota adhesiva",
"type": "n8n-nodes-base.stickyNote",
"position": [
224,
-912
],
"parameters": {
"width": 1120,
"height": 800,
"content": "## 🟨 Overview — Create ICP from website content to Google Docs\n\n## Who’s it for\nGrowth, marketing, sales, and founder teams that want a **decision-ready Ideal Customer Profile (ICP)**—grounded in their own site content.\n\n## How it works / What it does\n- **On form submission** collects **Website URL** and **Business Name** and redirects to Google Drive Folder after the final node. \n- **Crawl and Scrape the Website Content** - crawls and scrape **20 pages** from the website. \n- **ICP Creator** builds a **Markdown ICP** with: \nA) **Executive Summary**\nB) **One-Pager ICP**\nC) **Tiering & Lead Scoring**\nD) **Demand Gen & ABM Plays**\nE) **Evidence Log**\nF) **Section Confidence**\n**Facts vs. Inferences**, confidence scores and tables. \n- **Markdown to Google Doc** converts Markdown to Google Docs `batchUpdate` requests. Then this is used in **Update a document** for updating the empty doc. \n- **Create a document** + **Update a document** generate **“ICP for <Business Name>”** in your Drive folder and apply formatting.\n\n## How to set up\n1) Add credentials: **Firecrawl** (Authorization header), **OpenAI** (Chat), **Google Docs OAuth2**. \n2) Replace placeholders: `{{API_KEY}}`, `{{google_drive_folder_id}}`, `{{google_drive_folder_url}}`. \n3) Publish and open the Form URL to test.\n\n## Requirements\nFirecrawl API key • OpenAI API key • Google account with access to the target Drive folder.\n\n## Resources\nGoogle OAuth2 Credentials Setup - https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/ \nOpenAI API key - https://docs.n8n.io/integrations/builtin/credentials/openai/ \nFirecrawl API key - https://take.ms/lGcUp"
},
"typeVersion": 1
},
{
"id": "8b71cfce-65dc-4b4c-8a77-8cee0fb8c537",
"name": "Nota adhesiva1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-624,
-64
],
"parameters": {
"width": 528,
"height": 272,
"content": "## 🗒️ Setup checklist (do these before first run)\n\n- **On form submission → Respond with redirect**: set `={{google_drive_folder_url}}`. \n- **Create a document → folderId**: set `={{google_drive_folder_id}}`. \n- **FireCrawl headers**: `Authorization: Bearer {{API_KEY}}`. \n- **OpenAI Chat Model**: attach your OpenAI credential. \n- **Google Docs OAuth2**: attach to both Docs nodes - Create a document + Update a document. \n- Test with a real site (e.g., `vertodigital.com`) and verify the created Doc & redirect.\n"
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "a185e948-9835-4946-9cf7-315008e0973d",
"connections": {
"2ddb507d-b910-433f-88f4-c4838465bbb5": {
"main": [
[
{
"node": "b39d7fd1-0c9b-491c-9b47-103d1a76bfff",
"type": "main",
"index": 0
}
]
]
},
"8566b8fd-65ca-4119-93fd-fa8e4bf9a7a2": {
"main": [
[
{
"node": "8e4b4d46-279c-40a5-8f84-ec3654526c9e",
"type": "main",
"index": 0
}
]
]
},
"21494f4c-4c24-4420-a921-373f65a8bd35": {
"main": [
[
{
"node": "7d62c33d-6d67-4ff3-9008-a9cfaf28fe10",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "2ddb507d-b910-433f-88f4-c4838465bbb5",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"891d5c66-6c1d-4e02-bfff-db81c633a4b1": {
"main": [
[
{
"node": "8566b8fd-65ca-4119-93fd-fa8e4bf9a7a2",
"type": "main",
"index": 0
}
]
]
},
"b39d7fd1-0c9b-491c-9b47-103d1a76bfff": {
"main": [
[
{
"node": "21494f4c-4c24-4420-a921-373f65a8bd35",
"type": "main",
"index": 0
}
]
]
},
"8e4b4d46-279c-40a5-8f84-ec3654526c9e": {
"main": [
[
{
"node": "2ddb507d-b910-433f-88f4-c4838465bbb5",
"type": "main",
"index": 0
}
]
]
}
}
}
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings as needed.
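If you manage workflows as code, the same JSON can also be pushed through n8n's public REST API instead of the UI. Below is a minimal sketch, assuming Node.js 18+, an instance at http://localhost:5678 with the public API enabled, and an API key exported as N8N_API_KEY (all assumptions); the create-workflow endpoint may reject extra top-level fields such as id or meta, so only the documented ones are copied.

```javascript
// Sketch: create the workflow via n8n's public API (POST /api/v1/workflows).
// N8N_API_KEY and the instance URL are assumptions for this example.
const fs = require('node:fs');

// The full template JSON, saved locally (filename is an assumption).
const full = JSON.parse(fs.readFileSync('icp-creator.json', 'utf8'));

// Copy only the fields the create endpoint documents; extras such as
// id, meta, or tags can trigger validation errors on recent versions.
const payload = {
  name: full.name,
  nodes: full.nodes,
  connections: full.connections,
  settings: full.settings,
};

fetch('http://localhost:5678/api/v1/workflows', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-N8N-API-KEY': process.env.N8N_API_KEY,
  },
  body: JSON.stringify(payload),
})
  .then((res) => res.json())
  .then((wf) => console.log('Created workflow:', wf.id))
  .catch(console.error);
```

Either way, after importing, attach your own credentials (the credential IDs embedded in the JSON belong to the original author's instance) and replace the {{API_KEY}}, {{google_drive_folder_id}}, and {{google_drive_folder_url}} placeholders.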
In which scenarios is this workflow suitable?
Intermediate: Market Research, AI Summarization
Is it paid?
This workflow is completely free; you can import and use it directly. Note, however, that third-party services used by the workflow (such as the OpenAI API) may charge your own account.
Simeon Penev
@monkatagpt. I specialize in building AI workflows that streamline processes and reduce manual work. My main workflows are in the marketing field, but I can automate all kinds of processes and integrate AI into them.