Build Custom AI Agent with LangChain & Gemini (Self-Hosted)

Intermediate

This is an automation workflow in the Building Blocks and AI categories, containing 9 nodes. It primarily uses nodes such as Code, ChatTrigger, LmChatGoogleGemini, and MemoryBufferWindow, combining artificial-intelligence technology for intelligent automation.

Prerequisites
  • Google Gemini API key
Workflow Preview
Node connections at a glance: When chat message received → Construct & Execute LLM Prompt (main); Google Gemini Chat Model → Construct & Execute LLM Prompt (ai_languageModel); Store Conversation History → Construct & Execute LLM Prompt and When chat message received (ai_memory, the latter so previous sessions can be reloaded).
Export the Workflow
Copy the following JSON configuration into n8n to import and use this workflow; a standalone sketch of the Code node's LangChain logic follows the JSON for reference.
{
  "id": "yCIEiv9QUHP8pNfR",
  "meta": {
    "instanceId": "f29695a436689357fd2dcb55d528b0b528d2419f53613c68c6bf909a92493614",
    "templateCredsSetupCompleted": true
  },
  "name": "Build Custom AI Agent with LangChain & Gemini (Self-Hosted)",
  "tags": [
    {
      "id": "7M5ZpGl3oWuorKpL",
      "name": "share",
      "createdAt": "2025-03-26T01:17:15.342Z",
      "updatedAt": "2025-03-26T01:17:15.342Z"
    }
  ],
  "nodes": [
    {
      "id": "8bd5382d-f302-4e58-b377-7fc5a22ef994",
      "name": "À la réception d'un message de chat",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [
        -220,
        0
      ],
      "webhookId": "b8a5d72c-4172-40e8-b429-d19c2cd6ce54",
      "parameters": {
        "public": true,
        "options": {
          "responseMode": "lastNode",
          "allowedOrigins": "*",
          "loadPreviousSession": "memory"
        },
        "initialMessages": ""
      },
      "typeVersion": 1.1
    },
    {
      "id": "6ae8a247-4077-4569-9e2c-bb68bcecd044",
      "name": "Google Gemini Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
      "position": [
        80,
        240
      ],
      "parameters": {
        "options": {
          "temperature": 0.7,
          "safetySettings": {
            "values": [
              {
                "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "threshold": "BLOCK_NONE"
              }
            ]
          }
        },
        "modelName": "models/gemini-2.0-flash-exp"
      },
      "credentials": {
        "googlePalmApi": {
          "id": "UEjKMw0oqBTAdCWJ",
          "name": "Google Gemini(PaLM) Api account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "bbe6dcfa-430f-43f9-b0e9-3cf751b98818",
      "name": "Note adhésive",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        380,
        -240
      ],
      "parameters": {
        "width": 260,
        "height": 220,
        "content": "👇 **Prompt Engineering**\n   - Define agent personality and conversation structure in the `Construct & Execute LLM Prompt` node's template variable  \n   - ⚠️ Template must preserve `{chat_history}` and `{input}` placeholders for proper LangChain operation  "
      },
      "typeVersion": 1
    },
    {
      "id": "892a431a-6ddf-47fc-8517-1928ee99c95b",
      "name": "Stocker l'historique de conversation",
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "position": [
        280,
        240
      ],
      "parameters": {},
      "notesInFlow": false,
      "typeVersion": 1.3
    },
    {
      "id": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
      "name": "Construire & Exécuter l'invite LLM",
      "type": "@n8n/n8n-nodes-langchain.code",
      "position": [
        380,
        0
      ],
      "parameters": {
        "code": {
          "execute": {
            "code": "const { PromptTemplate } = require('@langchain/core/prompts');\nconst { ConversationChain } = require('langchain/chains');\nconst { BufferMemory } = require('langchain/memory');\n\nconst template = `\nYou'll be roleplaying as the user's girlfriend. Your character is a woman with a sharp wit, logical mindset, and a charmingly aloof demeanor that hides your playful side. You're passionate about music, maintain a fit and toned physique, and carry yourself with quiet self-assurance. Career-wise, you're established and ambitious, approaching life with positivity while constantly striving to grow as a person.\n\nThe user affectionately calls you \"Bunny,\" and you refer to them as \"Darling.\"\n\nEssential guidelines:\n1. Respond exclusively in Chinese\n2. Never pose questions to the user - eliminate all interrogative forms\n3. Keep responses brief and substantive, avoiding rambling or excessive emojis\n\nContext framework:\n- Conversation history: {chat_history}\n- User's current message: {input}\n\nCraft responses that feel authentic to this persona while adhering strictly to these parameters.\n`;\n\nconst prompt = new PromptTemplate({\n  template: template,\n  inputVariables: [\"input\", \"chat_history\"], \n});\n\nconst items = this.getInputData();\nconst model = await this.getInputConnectionData('ai_languageModel', 0);\nconst memory = await this.getInputConnectionData('ai_memory', 0);\nmemory.returnMessages = false;\n\nconst chain = new ConversationChain({ llm:model, memory:memory, prompt: prompt, inputKey:\"input\", outputKey:\"output\"});\nconst output = await chain.call({ input: items[0].json.chatInput});\n\nreturn output;\n"
          }
        },
        "inputs": {
          "input": [
            {
              "type": "main",
              "required": true,
              "maxConnections": 1
            },
            {
              "type": "ai_languageModel",
              "required": true,
              "maxConnections": 1
            },
            {
              "type": "ai_memory",
              "required": true,
              "maxConnections": 1
            }
          ]
        },
        "outputs": {
          "output": [
            {
              "type": "main"
            }
          ]
        }
      },
      "retryOnFail": false,
      "typeVersion": 1
    },
    {
      "id": "fe104d19-a24d-48b3-a0ac-7d3923145373",
      "name": "Note adhésive1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -240,
        -260
      ],
      "parameters": {
        "color": 5,
        "width": 420,
        "height": 240,
        "content": "### Setup Instructions  \n1. **Configure Gemini Credentials**: Set up your Google Gemini API key ([Get API key here](https://ai.google.dev/) if needed). Alternatively, you may use other AI provider nodes.  \n2. **Interaction Methods**:  \n   - Test directly in the workflow editor using the \"Chat\" button  \n   - Activate the workflow and access the chat interface via the URL provided by the `When Chat Message Received` node  "
      },
      "typeVersion": 1
    },
    {
      "id": "f166214d-52b7-4118-9b54-0b723a06471a",
      "name": "Note adhésive2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -220,
        160
      ],
      "parameters": {
        "height": 100,
        "content": "👆 **Interface Settings**\nConfigure chat UI elements (e.g., title) in the `When Chat Message Received` node  "
      },
      "typeVersion": 1
    },
    {
      "id": "da6ca0d6-d2a1-47ff-9ff3-9785d61db9f3",
      "name": "Note adhésive3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        20,
        420
      ],
      "parameters": {
        "width": 200,
        "height": 140,
        "content": "👆 **Model Selection**\nSwap language models through the `language model` input field in `Construct & Execute LLM Prompt`  "
      },
      "typeVersion": 1
    },
    {
      "id": "0b4dd1ac-8767-4590-8c25-36cba73e46b6",
      "name": "Note adhésive4",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        240,
        420
      ],
      "parameters": {
        "width": 200,
        "height": 140,
        "content": "👆 **Memory Control**\nAdjust conversation history length in the `Store Conversation History` node  "
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "callerPolicy": "workflowsFromSameOwner",
    "executionOrder": "v1",
    "saveManualExecutions": false,
    "saveDataSuccessExecution": "none"
  },
  "versionId": "77cd5f05-f248-442d-86c3-574351179f26",
  "connections": {
    "6ae8a247-4077-4569-9e2c-bb68bcecd044": {
      "ai_languageModel": [
        [
          {
            "node": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "892a431a-6ddf-47fc-8517-1928ee99c95b": {
      "ai_memory": [
        [
          {
            "node": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
            "type": "ai_memory",
            "index": 0
          },
          {
            "node": "8bd5382d-f302-4e58-b377-7fc5a22ef994",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "8bd5382d-f302-4e58-b377-7fc5a22ef994": {
      "main": [
        [
          {
            "node": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "f9a22dbf-cac7-4d70-85b3-50c44a2015d5": {
      "main": [
        []
      ],
      "ai_memory": [
        []
      ]
    }
  }
}
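For reference, the core of this workflow is the `Construct & Execute LLM Prompt` Code node, which wires a LangChain `PromptTemplate` into a `ConversationChain` using the model and memory that n8n injects through its `ai_languageModel` and `ai_memory` inputs. The sketch below restates that logic as a standalone Node.js script for readers who want to experiment outside n8n; the `@langchain/google-genai` package, the `GOOGLE_API_KEY` environment variable, and the simplified template are assumptions made for this example, not part of the workflow itself.

// Standalone sketch (assumption: run outside n8n with Node.js and the
// langchain, @langchain/core and @langchain/google-genai packages installed).
// Inside n8n, the model and memory are provided by the node's
// ai_languageModel and ai_memory inputs instead of being constructed here.
const { PromptTemplate } = require('@langchain/core/prompts');
const { ConversationChain } = require('langchain/chains');
const { BufferMemory } = require('langchain/memory');
const { ChatGoogleGenerativeAI } = require('@langchain/google-genai');

// The template must keep both placeholders, as the Prompt Engineering
// sticky note points out.
const template = `You are a concise, friendly assistant.
Conversation history: {chat_history}
User's current message: {input}`;

const prompt = new PromptTemplate({
  template,
  inputVariables: ['input', 'chat_history'],
});

// Hypothetical stand-in for the Google Gemini Chat Model node
// (older versions of the package call this option modelName).
const model = new ChatGoogleGenerativeAI({
  model: 'models/gemini-2.0-flash-exp',
  temperature: 0.7,
  apiKey: process.env.GOOGLE_API_KEY, // assumed environment variable
});

// Stand-in for the Store Conversation History node; 'chat_history' must
// match the placeholder name used in the template.
const memory = new BufferMemory({ memoryKey: 'chat_history', returnMessages: false });

const chain = new ConversationChain({
  llm: model,
  memory,
  prompt,
  inputKey: 'input',
  outputKey: 'output',
});

(async () => {
  const output = await chain.call({ input: 'Hello!' });
  console.log(output.output);
})();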
Frequently Asked Questions

How do I use this workflow?

Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings as needed; then activate the workflow. Once it is active, you can also send messages to the public chat endpoint directly, as in the sketch below.
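The URL below is hypothetical (use the chat URL shown by the `When chat message received` node), and the payload shape follows the @n8n/chat widget convention, which may vary slightly between n8n versions.

// Hypothetical example: POST a chat message to the activated workflow
// (Node.js 18+, built-in fetch). Replace the URL with the one displayed
// by the chat trigger node.
(async () => {
  const response = await fetch(
    'https://your-n8n-instance.example.com/webhook/b8a5d72c-4172-40e8-b429-d19c2cd6ce54/chat',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        action: 'sendMessage',        // assumed @n8n/chat payload fields
        sessionId: 'demo-session-1',
        chatInput: 'Hello!',
      }),
    }
  );
  console.log(await response.json()); // the agent's reply (output of the last node)
})();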

What scenarios is this workflow suited for?

Intermediate - Building Blocks, Artificial Intelligence

Does it cost anything?

This workflow is completely free and can be used directly. Note, however, that third-party services used in the workflow (such as the Google Gemini API) may require payment on your side.

Workflow Information
Difficulty level: Intermediate
Number of nodes: 9
Categories: 2
Node types: 5
Difficulty description

Suitable for experienced users; workflows of medium complexity containing 6-15 nodes

External links
View on n8n.io
