Chat with the GitHub OpenAPI Specification using RAG (Pinecone and OpenAI)

Expert

This is an automation workflow in the Engineering and AI categories with 17 nodes. It mainly uses HttpRequest, ManualTrigger, Agent, ChatTrigger, LmChatOpenAi and other node types, combined with AI for intelligent automation. Talk to the GitHub API documentation: a RAG-based chatbot that uses Pinecone and OpenAI.

Prerequisites
  • Credentials for the target API may be required
  • OpenAI API key
  • Pinecone API key (a quick check of both keys is sketched below)
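
If you want to verify both keys before importing, the following minimal sketch does a one-off check. It assumes the official openai and pinecone Python SDKs and that the keys are exported as the OPENAI_API_KEY and PINECONE_API_KEY environment variables (names chosen here for illustration).

# Quick credential check before importing the workflow.
# Assumes the official "openai" and "pinecone" Python SDKs are installed
# and that both keys are available as environment variables.
import os

from openai import OpenAI
from pinecone import Pinecone

# OpenAI: request a single embedding to confirm the key works.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
resp = openai_client.embeddings.create(model="text-embedding-3-small", input="ping")
print("OpenAI OK, embedding dimension:", len(resp.data[0].embedding))

# Pinecone: confirm the key works and list the available indexes.
# The workflow expects an index named "n8n-demo".
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
print("Pinecone indexes:", pc.list_indexes().names())
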
Export workflow
Copy the JSON configuration below and import it into n8n
{
  "id": "FD0bHNaehP3LzCNN",
  "meta": {
    "instanceId": "69133932b9ba8e1ef14816d0b63297bb44feb97c19f759b5d153ff6b0c59e18d"
  },
  "name": "Chat with GitHub OpenAPI Specification using RAG (Pinecone and OpenAI)",
  "tags": [],
  "nodes": [
    {
      "id": "362cb773-7540-4753-a401-e585cdf4af8a",
      "name": "Bei Klick auf 'Workflow testen'",
      "type": "n8n-nodes-base.manualTrigger",
      "position": [
        0,
        0
      ],
      "parameters": {},
      "typeVersion": 1
    },
    {
      "id": "45470036-cae6-48d0-ac66-addc8999e776",
      "name": "HTTP-Anfrage",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        300,
        0
      ],
      "parameters": {
        "url": "https://raw.githubusercontent.com/github/rest-api-description/refs/heads/main/descriptions/api.github.com/api.github.com.json",
        "options": {}
      },
      "typeVersion": 4.2
    },
    {
      "id": "a9e65897-52c9-4941-bf49-e1a659e442ef",
      "name": "Pinecone Vektorspeicher",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        520,
        0
      ],
      "parameters": {
        "mode": "insert",
        "options": {},
        "pineconeIndex": {
          "__rl": true,
          "mode": "list",
          "value": "n8n-demo",
          "cachedResultName": "n8n-demo"
        }
      },
      "credentials": {
        "pineconeApi": {
          "id": "bQTNry52ypGLqt47",
          "name": "PineconeApi account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "c2a2354b-5457-4ceb-abfc-9a58e8593b81",
      "name": "Standard-Datenlader",
      "type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
      "position": [
        660,
        180
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1
    },
    {
      "id": "7338d9ea-ae8f-46eb-807f-a15dc7639fc9",
      "name": "Rekursiver Zeichentext-Splitter",
      "type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
      "position": [
        740,
        360
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1
    },
    {
      "id": "44fd7a59-f208-4d5d-a22d-e9f8ca9badf1",
      "name": "Bei Empfang einer Chat-Nachricht",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [
        -20,
        760
      ],
      "webhookId": "089e38ab-4eee-4c34-aa5d-54cf4a8f53b7",
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "51d819d6-70ff-428d-aa56-1d7e06490dee",
      "name": "KI-Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [
        320,
        760
      ],
      "parameters": {
        "options": {
          "systemMessage": "You are a helpful assistant providing information about the GitHub API and how to use it based on the OpenAPI V3 specifications."
        }
      },
      "typeVersion": 1.7
    },
    {
      "id": "aed548bf-7083-44ad-a3e0-163dee7423ef",
      "name": "OpenAI-Chat-Modell",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        220,
        980
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.1
    },
    {
      "id": "dfe9f356-2225-4f4b-86c7-e56a230b4193",
      "name": "Fensterpuffer-Speicher",
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "position": [
        420,
        1020
      ],
      "parameters": {},
      "typeVersion": 1.3
    },
    {
      "id": "4cf672ee-13b8-4355-b8e0-c2e7381671bc",
      "name": "Vektorspeicher Tool",
      "type": "@n8n/n8n-nodes-langchain.toolVectorStore",
      "position": [
        580,
        980
      ],
      "parameters": {
        "name": "GitHub_OpenAPI_Specification",
        "description": "Use this tool to get information about the GitHub   API. This database contains OpenAPI v3 specifications."
      },
      "typeVersion": 1
    },
    {
      "id": "1df7fb85-9d4a-4db5-9bed-41d28e2e4643",
      "name": "OpenAI-Chat-Modell1",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        840,
        1160
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.1
    },
    {
      "id": "7b52ef7a-5935-451e-8747-efe16ce288af",
      "name": "Haftnotiz",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -40,
        -260
      ],
      "parameters": {
        "width": 640,
        "height": 200,
        "content": "## Indexing content in the vector database\nThis part of the workflow is responsible for extracting content, generating embeddings and sending them to the Pinecone vector store.\n\nIt requests the OpenAPI specifications from GitHub using a HTTP request. Then, it splits the file in chunks, generating embeddings for each chunk using OpenAI, and saving them in Pinecone vector DB."
      },
      "typeVersion": 1
    },
    {
      "id": "3508d602-56d4-4818-84eb-ca75cdeec1d0",
      "name": "Haftnotiz1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -20,
        560
      ],
      "parameters": {
        "width": 580,
        "content": "## Querying and response generation \n\nThis part of the workflow is responsible for the chat interface, querying the vector store and generating relevant responses.\n\nIt uses OpenAI GPT 4o-mini to generate responses."
      },
      "typeVersion": 1
    },
    {
      "id": "5a9808ef-4edd-4ec9-ba01-2fe50b2dbf4b",
      "name": "Benutzerabfrage-Einbettung generieren",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        480,
        1400
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "f703dc8e-9d4b-45e3-8994-789b3dfe8631",
      "name": "Pinecone Vektorspeicher (Abfrage)",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        440,
        1220
      ],
      "parameters": {
        "options": {},
        "pineconeIndex": {
          "__rl": true,
          "mode": "list",
          "value": "n8n-demo",
          "cachedResultName": "n8n-demo"
        }
      },
      "credentials": {
        "pineconeApi": {
          "id": "bQTNry52ypGLqt47",
          "name": "PineconeApi account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "ea64a7a5-1fa5-4938-83a9-271929733a8e",
      "name": "Einbettungen generieren",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        480,
        220
      ],
      "parameters": {
        "options": {}
      },
      "credentials": {
        "openAiApi": {
          "id": "tQLWnWRzD8aebYvp",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "65cbd4e3-91f6-441a-9ef1-528c3019e238",
      "name": "Haftnotiz2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -820,
        -260
      ],
      "parameters": {
        "width": 620,
        "height": 320,
        "content": "## RAG workflow in n8n\n\nThis is an example of how to use RAG techniques to create a chatbot with n8n. It is an API documentation chatbot that can answer questions about the GitHub API. It uses OpenAI for generating embeddings, the gpt-4o-mini LLM for generating responses and Pinecone as a vector database.\n\n### Before using this template\n* create OpenAI and Pinecone accounts\n* obtain API keys OpenAI and Pinecone \n* configure credentials in n8n for both\n* ensure you have a Pinecone index named \"n8n-demo\" or adjust the workflow accordingly."
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "2908105f-c20c-4183-bb9d-26e3559b9911",
  "connections": {
    "HTTP Request": {
      "main": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "51d819d6-70ff-428d-aa56-1d7e06490dee",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Vector Store Tool": {
      "ai_tool": [
        [
          {
            "node": "51d819d6-70ff-428d-aa56-1d7e06490dee",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model1": {
      "ai_languageModel": [
        [
          {
            "node": "Vector Store Tool",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "c2a2354b-5457-4ceb-abfc-9a58e8593b81": {
      "ai_document": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "ai_document",
            "index": 0
          }
        ]
      ]
    },
    "ea64a7a5-1fa5-4938-83a9-271929733a8e": {
      "ai_embedding": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "dfe9f356-2225-4f4b-86c7-e56a230b4193": {
      "ai_memory": [
        [
          {
            "node": "51d819d6-70ff-428d-aa56-1d7e06490dee",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "44fd7a59-f208-4d5d-a22d-e9f8ca9badf1": {
      "main": [
        [
          {
            "node": "51d819d6-70ff-428d-aa56-1d7e06490dee",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "5a9808ef-4edd-4ec9-ba01-2fe50b2dbf4b": {
      "ai_embedding": [
        [
          {
            "node": "Pinecone Vector Store (Abfrage)",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "Pinecone Vector Store (Abfrage)": {
      "ai_vectorStore": [
        [
          {
            "node": "Vector Store Tool",
            "type": "ai_vectorStore",
            "index": 0
          }
        ]
      ]
    },
    "7338d9ea-ae8f-46eb-807f-a15dc7639fc9": {
      "ai_textSplitter": [
        [
          {
            "node": "c2a2354b-5457-4ceb-abfc-9a58e8593b81",
            "type": "ai_textSplitter",
            "index": 0
          }
        ]
      ]
    },
    "362cb773-7540-4753-a401-e585cdf4af8a": {
      "main": [
        [
          {
            "node": "HTTP Request",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
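
For readers who want to see what the two branches of this workflow do outside of n8n, here is a rough standalone sketch: the first branch fetches the GitHub OpenAPI spec, chunks it, embeds the chunks with OpenAI and upserts them into Pinecone; the second embeds a question, retrieves the closest chunks and lets gpt-4o-mini answer from them. It assumes the official openai and pinecone Python SDKs and the "n8n-demo" index used above; the embedding model, chunk size and top_k are illustrative choices, not values taken from the workflow JSON.

# Standalone sketch of the workflow's two branches (indexing + querying).
# Model names, chunk size and top_k are illustrative assumptions.
import os

import requests
from openai import OpenAI
from pinecone import Pinecone

SPEC_URL = (
    "https://raw.githubusercontent.com/github/rest-api-description/"
    "refs/heads/main/descriptions/api.github.com/api.github.com.json"
)

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("n8n-demo")


def embed(texts: list[str]) -> list[list[float]]:
    """Embed a batch of texts with OpenAI."""
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]


def index_spec(chunk_size: int = 4000, batch_size: int = 100) -> None:
    """Branch 1: fetch the spec, split into fixed-size chunks, embed and upsert."""
    text = requests.get(SPEC_URL, timeout=60).text
    chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]
    for start in range(0, len(chunks), batch_size):
        batch = chunks[start : start + batch_size]
        vectors = [
            {"id": f"chunk-{start + i}", "values": vec, "metadata": {"text": chunk}}
            for i, (chunk, vec) in enumerate(zip(batch, embed(batch)))
        ]
        index.upsert(vectors=vectors)


def answer(question: str, top_k: int = 5) -> str:
    """Branch 2: retrieve the most relevant chunks and answer with gpt-4o-mini."""
    result = index.query(vector=embed([question])[0], top_k=top_k, include_metadata=True)
    context = "\n\n".join(match.metadata["text"] for match in result.matches)
    chat = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant providing information about the "
                "GitHub API based on the OpenAPI V3 specifications.",
            },
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content


if __name__ == "__main__":
    # index_spec()  # run once; the full spec is large, so embedding it takes time and credits
    print(answer("How do I list issues for a repository?"))
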
Frequently asked questions

How do I use this workflow?

Copy the JSON above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
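
If your instance has the n8n public REST API enabled, the same JSON can also be created programmatically instead of being pasted into the editor. A minimal sketch, assuming the instance is reachable at http://localhost:5678, an API key stored in the N8N_API_KEY environment variable, and the exported JSON saved locally (the filename below is hypothetical):

# Create the workflow via n8n's public REST API instead of the UI import.
# N8N_URL, N8N_API_KEY and the local filename are assumptions; adjust to your setup.
import json
import os

import requests

N8N_URL = os.environ.get("N8N_URL", "http://localhost:5678")
API_KEY = os.environ["N8N_API_KEY"]

with open("github-openapi-rag.json") as f:  # the JSON shown above, saved to a file
    workflow = json.load(f)

# Send only the core workflow fields to the create endpoint.
payload = {
    "name": workflow["name"],
    "nodes": workflow["nodes"],
    "connections": workflow["connections"],
    "settings": workflow["settings"],
}

resp = requests.post(
    f"{N8N_URL}/api/v1/workflows",
    headers={"X-N8N-API-KEY": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created workflow with id:", resp.json()["id"])
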

Which scenarios is this workflow suitable for?

Expert - Engineering, Artificial Intelligence

Does it cost anything?

The workflow itself is completely free. Note, however, that third-party services used in the workflow (such as the OpenAI API) may incur charges.

Workflow information
Difficulty level: Expert
Number of nodes: 17
Categories: 2
Node types: 12
Difficulty description

For advanced users; complex workflows with 16+ nodes

Author
Mihai Farcas

@mihailtd

Full-stack developer with 5+ years streamlining healthcare processes. Proficient in NodeJS, VueJS, MongoDB, PostgreSQL, Kubernetes, and n8n. Ready to optimize your workflows – book a consult via my link.

External links
View on n8n.io
