Chat with the GitHub OpenAPI Specification using RAG (Pinecone and OpenAI)

Advanced

This is an automation workflow in the Engineering and AI domain that contains 17 nodes. It mainly uses nodes such as HttpRequest, ManualTrigger, Agent, ChatTrigger, and LmChatOpenAi, combining AI techniques to achieve intelligent automation. It lets you chat with the GitHub API documentation: a RAG-based chatbot using Pinecone and OpenAI.
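
For readers who want to see what the indexing branch of this workflow does outside of n8n, here is a minimal Python sketch, assuming the official openai and pinecone SDKs, a Pinecone index named "n8n-demo", and OPENAI_API_KEY / PINECONE_API_KEY set in the environment. The fixed-size chunking and the text-embedding-3-small model are illustrative stand-ins for the Recursive Character Text Splitter and embeddings nodes; the workflow JSON itself does not pin these choices.

# Hedged sketch of the indexing branch (HTTP Request -> splitter ->
# OpenAI embeddings -> Pinecone). Assumes the `openai`, `pinecone`
# and `requests` packages plus OPENAI_API_KEY / PINECONE_API_KEY.
import os
import requests
from openai import OpenAI
from pinecone import Pinecone

SPEC_URL = ("https://raw.githubusercontent.com/github/rest-api-description/"
            "refs/heads/main/descriptions/api.github.com/api.github.com.json")

openai_client = OpenAI()                                   # reads OPENAI_API_KEY
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("n8n-demo")

spec = requests.get(SPEC_URL, timeout=60).text             # raw OpenAPI document

# Fixed-size chunking as a stand-in for n8n's Recursive Character Text
# Splitter node; only the first 50 chunks are indexed in this demo run.
chunk_size = 2000
chunks = [spec[i:i + chunk_size] for i in range(0, len(spec), chunk_size)][:50]

embeddings = openai_client.embeddings.create(
    model="text-embedding-3-small",                        # illustrative model choice
    input=chunks,
)
index.upsert(vectors=[
    {"id": f"github-openapi-{i}", "values": item.embedding, "metadata": {"text": chunks[i]}}
    for i, item in enumerate(embeddings.data)
])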

Prerequisites
  • Authentication credentials may be required for the target API
  • OpenAI API key
  • Pinecone API key
Export workflow
Copy the following JSON configuration into n8n to import and use this workflow
{
  "id": "FD0bHNaehP3LzCNN",
  "meta": {
    "instanceId": "69133932b9ba8e1ef14816d0b63297bb44feb97c19f759b5d153ff6b0c59e18d"
  },
  "name": "Chat with GitHub OpenAPI Specification using RAG (Pinecone and OpenAI)",
  "tags": [],
  "nodes": [
    {
      "id": "362cb773-7540-4753-a401-e585cdf4af8a",
      "name": "Al hacer clic en 'Probar flujo de trabajo'",
      "type": "n8n-nodes-base.manualTrigger",
      "position": [
        0,
        0
      ],
      "parameters": {},
      "typeVersion": 1
    },
    {
      "id": "45470036-cae6-48d0-ac66-addc8999e776",
      "name": "Solicitud HTTP",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        300,
        0
      ],
      "parameters": {
        "url": "https://raw.githubusercontent.com/github/rest-api-description/refs/heads/main/descriptions/api.github.com/api.github.com.json",
        "options": {}
      },
      "typeVersion": 4.2
    },
    {
      "id": "a9e65897-52c9-4941-bf49-e1a659e442ef",
      "name": "Pinecone Almacén de vectores",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        520,
        0
      ],
      "parameters": {
        "mode": "insert",
        "options": {},
        "pineconeIndex": {
          "__rl": true,
          "mode": "list",
          "value": "n8n-demo",
          "cachedResultName": "n8n-demo"
        }
      },
      "credentials": {
        "pineconeApi": {
          "id": "bQTNry52ypGLqt47",
          "name": "PineconeApi account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "c2a2354b-5457-4ceb-abfc-9a58e8593b81",
      "name": "Default Data Loader",
      "type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
      "position": [
        660,
        180
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1
    },
    {
      "id": "7338d9ea-ae8f-46eb-807f-a15dc7639fc9",
      "name": "Recursive Character Text Splitter",
      "type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
      "position": [
        740,
        360
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1
    },
    {
      "id": "44fd7a59-f208-4d5d-a22d-e9f8ca9badf1",
      "name": "Al recibir mensaje de chat",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [
        -20,
        760
      ],
      "webhookId": "089e38ab-4eee-4c34-aa5d-54cf4a8f53b7",
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "51d819d6-70ff-428d-aa56-1d7e06490dee",
      "name": "Agente IA",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [
        320,
        760
      ],
      "parameters": {
        "options": {
          "systemMessage": "You are a helpful assistant providing information about the GitHub API and how to use it based on the OpenAPI V3 specifications."
        }
      },
      "typeVersion": 1.7
    },
    {
      "id": "aed548bf-7083-44ad-a3e0-163dee7423ef",
      "name": "Modelo de chat OpenAI",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        220,
        980
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.1
    },
    {
      "id": "dfe9f356-2225-4f4b-86c7-e56a230b4193",
      "name": "Window Buffer Memoria",
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "position": [
        420,
        1020
      ],
      "parameters": {},
      "typeVersion": 1.3
    },
    {
      "id": "4cf672ee-13b8-4355-b8e0-c2e7381671bc",
      "name": "Almacén de vectores Tool",
      "type": "@n8n/n8n-nodes-langchain.toolVectorStore",
      "position": [
        580,
        980
      ],
      "parameters": {
        "name": "GitHub_OpenAPI_Specification",
        "description": "Use this tool to get information about the GitHub   API. This database contains OpenAPI v3 specifications."
      },
      "typeVersion": 1
    },
    {
      "id": "1df7fb85-9d4a-4db5-9bed-41d28e2e4643",
      "name": "Modelo de chat OpenAI1",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        840,
        1160
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.1
    },
    {
      "id": "7b52ef7a-5935-451e-8747-efe16ce288af",
      "name": "Nota adhesiva",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -40,
        -260
      ],
      "parameters": {
        "width": 640,
        "height": 200,
        "content": "## Indexing content in the vector database\nThis part of the workflow is responsible for extracting content, generating embeddings and sending them to the Pinecone vector store.\n\nIt requests the OpenAPI specifications from GitHub using a HTTP request. Then, it splits the file in chunks, generating embeddings for each chunk using OpenAI, and saving them in Pinecone vector DB."
      },
      "typeVersion": 1
    },
    {
      "id": "3508d602-56d4-4818-84eb-ca75cdeec1d0",
      "name": "Nota adhesiva1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -20,
        560
      ],
      "parameters": {
        "width": 580,
        "content": "## Querying and response generation \n\nThis part of the workflow is responsible for the chat interface, querying the vector store and generating relevant responses.\n\nIt uses OpenAI GPT 4o-mini to generate responses."
      },
      "typeVersion": 1
    },
    {
      "id": "5a9808ef-4edd-4ec9-ba01-2fe50b2dbf4b",
      "name": "Generate User Query Embedding",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        480,
        1400
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "f703dc8e-9d4b-45e3-8994-789b3dfe8631",
      "name": "Pinecone Almacén de vectores (Querying)",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        440,
        1220
      ],
      "parameters": {
        "options": {},
        "pineconeIndex": {
          "__rl": true,
          "mode": "list",
          "value": "n8n-demo",
          "cachedResultName": "n8n-demo"
        }
      },
      "credentials": {
        "pineconeApi": {
          "id": "bQTNry52ypGLqt47",
          "name": "PineconeApi account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "ea64a7a5-1fa5-4938-83a9-271929733a8e",
      "name": "Generate Incrustaciones",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        480,
        220
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "65cbd4e3-91f6-441a-9ef1-528c3019e238",
      "name": "Nota adhesiva2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -820,
        -260
      ],
      "parameters": {
        "width": 620,
        "height": 320,
        "content": "## RAG workflow in n8n\n\nThis is an example of how to use RAG techniques to create a chatbot with n8n. It is an API documentation chatbot that can answer questions about the GitHub API. It uses OpenAI for generating embeddings, the gpt-4o-mini LLM for generating responses and Pinecone as a vector database.\n\n### Before using this template\n* create OpenAI and Pinecone accounts\n* obtain API keys OpenAI and Pinecone \n* configure credentials in n8n for both\n* ensure you have a Pinecone index named \"n8n-demo\" or adjust the workflow accordingly."
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "2908105f-c20c-4183-bb9d-26e3559b9911",
  "connections": {
    "HTTP Request": {
      "main": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Vector Store Tool": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model1": {
      "ai_languageModel": [
        [
          {
            "node": "Vector Store Tool",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "c2a2354b-5457-4ceb-abfc-9a58e8593b81": {
      "ai_document": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "ai_document",
            "index": 0
          }
        ]
      ]
    },
    "Generate Embeddings": {
      "ai_embedding": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "Window Buffer Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "44fd7a59-f208-4d5d-a22d-e9f8ca9badf1": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "5a9808ef-4edd-4ec9-ba01-2fe50b2dbf4b": {
      "ai_embedding": [
        [
          {
            "node": "Pinecone Vector Store (Querying)",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "Pinecone Vector Store (Querying)": {
      "ai_vectorStore": [
        [
          {
            "node": "Vector Store Tool",
            "type": "ai_vectorStore",
            "index": 0
          }
        ]
      ]
    },
    "7338d9ea-ae8f-46eb-807f-a15dc7639fc9": {
      "ai_textSplitter": [
        [
          {
            "node": "c2a2354b-5457-4ceb-abfc-9a58e8593b81",
            "type": "ai_textSplitter",
            "index": 0
          }
        ]
      ]
    },
    "362cb773-7540-4753-a401-e585cdf4af8a": {
      "main": [
        [
          {
            "node": "HTTP Request",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
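
As a counterpart to the indexing sketch above, the following minimal Python sketch approximates the querying branch of the workflow: embed the incoming chat message, retrieve similar chunks from the "n8n-demo" Pinecone index, and let gpt-4o-mini answer from them. It assumes the same SDKs and environment variables as before and omits the AI Agent's window-buffer memory; the answer() helper and model names are illustrative, not part of the template.

# Hedged sketch of the querying branch (Chat Trigger -> AI Agent with a
# Pinecone-backed vector store tool -> gpt-4o-mini). Memory is omitted.
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                                   # reads OPENAI_API_KEY
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("n8n-demo")

def answer(question: str, top_k: int = 5) -> str:
    # Embed the user question with the same model used for indexing.
    query_vec = openai_client.embeddings.create(
        model="text-embedding-3-small", input=[question]
    ).data[0].embedding

    # Retrieve the most similar specification chunks from Pinecone.
    matches = index.query(vector=query_vec, top_k=top_k, include_metadata=True).matches
    context = "\n\n".join(m.metadata["text"] for m in matches)

    # Generate the answer with gpt-4o-mini, grounded in the retrieved context.
    chat = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant providing information about "
                        "the GitHub API based on the OpenAPI V3 specifications."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How do I list issues for a repository?"))
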
Frequently asked questions

How do I use this workflow?

Copy the JSON configuration above, create a new workflow in your n8n instance, select "Import from JSON", paste the configuration, and then adjust the credential settings as needed.

Which scenarios is this workflow suited for?

Advanced - Engineering, Artificial Intelligence

Is it paid?

This workflow is completely free and you can import and use it directly. However, keep in mind that the third-party services it relies on (such as the OpenAI API) may require payment on your own account.

Workflow information
Difficulty level
Advanced
Number of nodes: 17
Categories: 2
Node types: 12
Difficulty description

Suitable for advanced users; complex workflows with 16+ nodes

Author
Mihai Farcas

@mihailtd

Full-stack developer with 5+ years streamlining healthcare processes. Proficient in NodeJS, VueJS, MongoDB, PostgreSQL, Kubernetes, and n8n. Ready to optimize your workflows – book a consult via my link.

External links
View on n8n.io
