Compare AI Models with the Nvidia API: Qwen, DeepSeek, Seed-OSS and Nemotron

Intermediate

This is a workflow containing 11 nodes. It mainly uses nodes such as Set, Merge, Switch, Webhook and HttpRequest to compare AI models via the Nvidia API: Qwen, DeepSeek, Seed-OSS and Nemotron.

Prerequisites
  • HTTP Webhook endpoint (generated automatically by n8n); an example request payload is shown below
  • May require authentication credentials for the target API
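
A minimal sketch of how a client might call the webhook once the workflow is active. The field names "AI Model" and "Insert your Query" are taken from the workflow's Switch and HTTP Request nodes; the instance hostname, the routing value and the query text are placeholder assumptions you will need to adapt to your own n8n setup.

import requests

# Hypothetical production webhook URL -- replace the hostname with that of your n8n instance.
# The path matches the Webhook Trigger node in the exported workflow.
WEBHOOK_URL = "https://your-n8n-instance/webhook/6737b4b1-3c2f-47b9-89ff-a012c1fa4f29"

payload = {
    # "AI Model" selects the Switch branch: "1"-"4" target a single model,
    # while "5" fans the query out to several models at once (per the workflow's connections).
    "AI Model": "5",
    "Insert your Query": "Summarise the differences between transformers and RNNs.",
}

response = requests.post(WEBHOOK_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json())

The Webhook Trigger is configured with responseMode "responseNode", so the client receives whatever the "Envoyer les réponses agrégées des modèles d'IA" node returns after the merge and formatting steps.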

Category

-
Workflow Overview
Export the Workflow
Copy the following JSON configuration into n8n to import and use this workflow:
{
  "id": "vwBMikFazJ8dTN7C",
  "meta": {
    "instanceId": "b91e510ebae4127f953fd2f5f8d40d58ca1e71c746d4500c12ae86aad04c1502",
    "templateCredsSetupCompleted": true
  },
  "name": "Compare AI Models with Nvidia API: Qwen, DeepSeek, Seed-OSS & Nemotron",
  "tags": [],
  "nodes": [
    {
      "id": "2fd77eab-0817-4d39-a206-4506b5373765",
      "name": "Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "position": [
        -144,
        -528
      ],
      "webhookId": "6737b4b1-3c2f-47b9-89ff-a012c1fa4f29",
      "parameters": {
        "path": "6737b4b1-3c2f-47b9-89ff-a012c1fa4f29",
        "options": {},
        "httpMethod": "POST",
        "responseMode": "responseNode"
      },
      "typeVersion": 2.1
    },
    {
      "id": "1f78059c-f7a8-493c-886e-05047d83a7b4",
      "name": "Note",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -1072,
        -848
      ],
      "parameters": {
        "width": 864,
        "height": 944,
        "content": "# Compare AI Models with Nvidia API: Qwen, DeepSeek, Seed-OSS & Nemotron\n\n## Overview\n- Queries four AI models simultaneously via Nvidia's API in 2-3 seconds—4x faster than sequential processing. Perfect for ensemble intelligence, model comparison, or redundancy.\n\n\n## How It Works\n- Webhook Trigger receives queries\n- AI Router distributes to four parallel branches: Qwen2, SyncGenInstruct, DeepSeek-v3.1, and Nvidia Nemotron\n- Merge Node aggregates responses (continues with partial results on timeout)\n- Format Response structures output\n- Webhook Response returns JSON with all model outputs\n\n## Prerequisites\n\n- Nvidia API key from [build.nvidia.com](https://build.nvidia.com) (free tier available)\n- n8n v1.0.0+ with HTTP access\n- Model access in Nvidia dashboard\n\n## Setup\n\n1. Import workflow JSON\n2. Configure HTTP nodes: Authentication → Header Auth → `Authorization: Bearer YOUR_TOKEN_HERE`\n3. Activate workflow and test\n\n## Customization\n\nAdjust temperature/max_tokens in HTTP nodes, add/remove models by duplicating nodes, change primary response selection in Format node, or add Redis caching for frequent queries.\n\n## Use Cases\n\nMulti-model chatbots, A/B testing, code review, research assistance, and production systems with AI fallback.\n"
      },
      "typeVersion": 1
    },
    {
      "id": "e7f74b77-470b-49e4-a191-577afda45296",
      "name": "Note4",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -192,
        -848
      ],
      "parameters": {
        "color": 3,
        "width": 1312,
        "height": 784,
        "content": ""
      },
      "typeVersion": 1
    },
    {
      "id": "8a0ca7d2-f4c0-4a95-9a7a-63c9d40ef77e",
      "name": "Formater la réponse",
      "type": "n8n-nodes-base.set",
      "position": [
        720,
        -544
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "bbfd9a05-0e6c-44cf-80e2-2a79ecb3f67a",
              "name": "choices[0].message.content",
              "type": "string",
              "value": "={{ $json.choices[0].message.content }}"
            }
          ]
        }
      },
      "typeVersion": 3.4
    },
    {
      "id": "20e9c15e-cd3d-4624-8620-5e100081bab1",
      "name": "Envoyer les réponses agrégées des modèles d'IA",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [
        944,
        -544
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.4
    },
    {
      "id": "0b86c542-74ce-4456-b025-07025e6f57a7",
      "name": "Fusionner les modèles d'IA",
      "type": "n8n-nodes-base.merge",
      "position": [
        528,
        -576
      ],
      "parameters": {
        "numberInputs": 4
      },
      "typeVersion": 3.2
    },
    {
      "id": "556f837e-5958-4121-9142-f3a05b560190",
      "name": "Routeur de modèles d'IA",
      "type": "n8n-nodes-base.switch",
      "position": [
        80,
        -576
      ],
      "parameters": {
        "rules": {
          "values": [
            {
              "conditions": {
                "options": {
                  "version": 2,
                  "leftValue": "",
                  "caseSensitive": true,
                  "typeValidation": "strict"
                },
                "combinator": "and",
                "conditions": [
                  {
                    "id": "8c79834b-efde-4096-8a97-687dbaac1eaa",
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json['AI Model'] }}",
                    "rightValue": "1"
                  }
                ]
              }
            },
            {
              "conditions": {
                "options": {
                  "version": 2,
                  "leftValue": "",
                  "caseSensitive": true,
                  "typeValidation": "strict"
                },
                "combinator": "and",
                "conditions": [
                  {
                    "id": "6f423cc4-08e3-41aa-8c5a-40a2d37a248d",
                    "operator": {
                      "name": "filter.operator.equals",
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json['AI Model'] }}",
                    "rightValue": "2"
                  }
                ]
              }
            },
            {
              "conditions": {
                "options": {
                  "version": 2,
                  "leftValue": "",
                  "caseSensitive": true,
                  "typeValidation": "strict"
                },
                "combinator": "and",
                "conditions": [
                  {
                    "id": "b8ba2c94-78d3-4325-8dda-e139d2dad24d",
                    "operator": {
                      "name": "filter.operator.equals",
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json['AI Model'] }}",
                    "rightValue": "3"
                  }
                ]
              }
            },
            {
              "conditions": {
                "options": {
                  "version": 2,
                  "leftValue": "",
                  "caseSensitive": true,
                  "typeValidation": "strict"
                },
                "combinator": "and",
                "conditions": [
                  {
                    "id": "0d1a15d3-047f-4489-896e-af2c079de4ae",
                    "operator": {
                      "name": "filter.operator.equals",
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json['AI Model'] }}",
                    "rightValue": "4"
                  }
                ]
              }
            },
            {
              "conditions": {
                "options": {
                  "version": 2,
                  "leftValue": "",
                  "caseSensitive": true,
                  "typeValidation": "strict"
                },
                "combinator": "and",
                "conditions": [
                  {
                    "id": "634191cd-73c9-4335-987b-93e07ba7ab0f",
                    "operator": {
                      "name": "filter.operator.equals",
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json['AI Model'] }}",
                    "rightValue": "5"
                  }
                ]
              }
            }
          ]
        },
        "options": {}
      },
      "typeVersion": 3.2
    },
    {
      "id": "38a42944-835b-422c-b872-b20c8f899210",
      "name": "Interroger Qwen3-next-80b-a3b-thinking (Alibaba)",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        304,
        -832
      ],
      "parameters": {
        "url": "https://integrate.api.nvidia.com/v1/chat/completions",
        "method": "POST",
        "options": {},
        "jsonBody": "={\n  \"model\": \"qwen/qwen3-next-80b-a3b-thinking\",\n  \"messages\": [\n    {\n      \"role\": \"user\",\n      \"content\": \"{{ $('On form submission').item.json['Insert your Query'] }}\"\n    }\n  ],\n  \"temperature\": 0.7,\n  \"max_tokens\": 1024\n} ",
        "sendBody": true,
        "sendHeaders": true,
        "specifyBody": "json",
        "authentication": "genericCredentialType",
        "headerParameters": {
          "parameters": [
            {
              "name": "accept",
              "value": "application/json"
            }
          ]
        }
      },
      "credentials": {
        "httpBearerAuth": {
          "id": "AM38cMMgmt5pCa3J",
          "name": "Bearer YOUR_TOKEN_HERE"
        }
      },
      "typeVersion": 4.2
    },
    {
      "id": "0d948f27-f325-4776-88f5-17993c22f382",
      "name": "Interroger Bytedance/seed-oss-36b-instruct (Bytedance)",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        304,
        -640
      ],
      "parameters": {
        "url": "https://integrate.api.nvidia.com/v1/chat/completions",
        "method": "POST",
        "options": {},
        "jsonBody": "={\n  \"model\": \"bytedance/seed-oss-36b-instruct\",\n  \"messages\": [\n    {\n      \"role\": \"user\",\n      \"content\": \"{{ $json['Insert your Query'] }}\"\n    }\n  ],\n  \"temperature\": 1.1,\n  \"top_p\": 0.95,\n  \"max_tokens\": 4096,\n  \"thinking_budget\": -1,\n  \"frequency_penalty\": 0,\n  \"presence_penalty\": 0,\n  \"stream\": false\n}",
        "sendBody": true,
        "specifyBody": "json",
        "authentication": "genericCredentialType"
      },
      "credentials": {
        "httpBearerAuth": {
          "id": "81rXxn13x9fyoYSK",
          "name": "Bearer YOUR_TOKEN_HERE Nvidia_bytedance/seed-oss-36b-instruct"
        }
      },
      "typeVersion": 4.2
    },
    {
      "id": "8fb1c1df-6544-4275-af67-c7f85b9fed92",
      "name": "Interroger Nvidia-nemotron-nano-9b-v2 (Nvidia)",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        304,
        -256
      ],
      "parameters": {
        "url": "https://integrate.api.nvidia.com/v1/chat/completions",
        "method": "POST",
        "options": {},
        "jsonBody": "{\n  \"model\": \"nvidia/nvidia-nemotron-nano-9b-v2\",\n  \"messages\": [\n    {\n      \"role\": \"system\",\n      \"content\": \"/think\"\n    }\n  ],\n  \"temperature\": 0.6,\n  \"top_p\": 0.95,\n  \"max_tokens\": 2048,\n  \"min_thinking_tokens\": 1024,\n  \"max_thinking_tokens\": 2048,\n  \"frequency_penalty\": 0,\n  \"presence_penalty\": 0,\n  \"stream\": true\n}",
        "sendBody": true,
        "sendHeaders": true,
        "specifyBody": "json",
        "authentication": "genericCredentialType",
        "headerParameters": {
          "parameters": [
            {}
          ]
        }
      },
      "credentials": {
        "httpBearerAuth": {
          "id": "De0YbIT8HKmoZ2QW",
          "name": "Bearer YOUR_TOKEN_HERE"
        }
      },
      "typeVersion": 4.2
    },
    {
      "id": "d0e9668b-1c75-4e41-90ec-684abeae0d49",
      "name": "Interroger DeepSeekv3_1",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        304,
        -432
      ],
      "parameters": {
        "url": "https://integrate.api.nvidia.com/v1/chat/completions",
        "method": "POST",
        "options": {},
        "jsonBody": "={\n  \"model\": \"deepseek-ai/deepseek-r1\",\n  \"messages\": [\n    {\n      \"role\": \"user\",\n      \"content\": \"{{ $('On form submission').item.json['Insert your Query'] }}\"\n    }\n  ],\n  \"temperature\": 0.6,\n  \"top_p\": 0.7,\n  \"frequency_penalty\": 0,\n  \"presence_penalty\": 0,\n  \"max_tokens\": 4096,\n  \"stream\": true\n} ",
        "sendBody": true,
        "sendHeaders": true,
        "specifyBody": "json",
        "authentication": "genericCredentialType",
        "headerParameters": {
          "parameters": [
            {
              "name": "Accept",
              "value": "application/json"
            }
          ]
        }
      },
      "credentials": {
        "httpBearerAuth": {
          "id": "C39RW210A9LPDPUu",
          "name": "Bearer YOUR_TOKEN_HERE Nvidia_Deepseekv31"
        }
      },
      "typeVersion": 4.2
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "34faee65-7df2-4012-93bf-50660415c2d2",
  "connections": {
    "0b86c542-74ce-4456-b025-07025e6f57a7": {
      "main": [
        [
          {
            "node": "8a0ca7d2-f4c0-4a95-9a7a-63c9d40ef77e",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "556f837e-5958-4121-9142-f3a05b560190": {
      "main": [
        [
          {
            "node": "38a42944-835b-422c-b872-b20c8f899210",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "0d948f27-f325-4776-88f5-17993c22f382",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "d0e9668b-1c75-4e41-90ec-684abeae0d49",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "8fb1c1df-6544-4275-af67-c7f85b9fed92",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "38a42944-835b-422c-b872-b20c8f899210",
            "type": "main",
            "index": 0
          },
          {
            "node": "0d948f27-f325-4776-88f5-17993c22f382",
            "type": "main",
            "index": 0
          },
          {
            "node": "8fb1c1df-6544-4275-af67-c7f85b9fed92",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "8a0ca7d2-f4c0-4a95-9a7a-63c9d40ef77e": {
      "main": [
        [
          {
            "node": "20e9c15e-cd3d-4624-8620-5e100081bab1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "2fd77eab-0817-4d39-a206-4506b5373765": {
      "main": [
        [
          {
            "node": "556f837e-5958-4121-9142-f3a05b560190",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "d0e9668b-1c75-4e41-90ec-684abeae0d49": {
      "main": [
        [
          {
            "node": "0b86c542-74ce-4456-b025-07025e6f57a7",
            "type": "main",
            "index": 2
          }
        ]
      ]
    },
    "8fb1c1df-6544-4275-af67-c7f85b9fed92": {
      "main": [
        [
          {
            "node": "0b86c542-74ce-4456-b025-07025e6f57a7",
            "type": "main",
            "index": 3
          }
        ]
      ]
    },
    "38a42944-835b-422c-b872-b20c8f899210": {
      "main": [
        [
          {
            "node": "0b86c542-74ce-4456-b025-07025e6f57a7",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "0d948f27-f325-4776-88f5-17993c22f382": {
      "main": [
        [
          {
            "node": "0b86c542-74ce-4456-b025-07025e6f57a7",
            "type": "main",
            "index": 1
          }
        ]
      ]
    }
  }
}
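
Before importing, you may want to confirm that your Nvidia API key works against the chat completions endpoint used by the HTTP Request nodes. The sketch below mirrors the request body of the Qwen node in the workflow; the nvapi- key value and the prompt are placeholders, and the response shape assumed here is the OpenAI-compatible one the Format node reads (choices[0].message.content).

import requests

# Placeholder API key from build.nvidia.com -- do not commit real keys.
NVIDIA_API_KEY = "nvapi-YOUR_TOKEN_HERE"

# Same endpoint and body shape as the "Interroger Qwen3-next-80b-a3b-thinking (Alibaba)" node.
resp = requests.post(
    "https://integrate.api.nvidia.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {NVIDIA_API_KEY}",
        "Accept": "application/json",
    },
    json={
        "model": "qwen/qwen3-next-80b-a3b-thinking",
        "messages": [{"role": "user", "content": "Hello, which model are you?"}],
        "temperature": 0.7,
        "max_tokens": 1024,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
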
Frequently Asked Questions

How do I use this workflow?

Copy the JSON configuration above, create a new workflow in your n8n instance, select "Import from JSON", paste the configuration, and adjust the authentication settings as needed.

What scenarios is this workflow suited to?

Intermediate

Is it free to use?

The workflow itself is completely free and can be used directly. Note, however, that the third-party services it calls (such as the Nvidia API) may require payment on your side.

Workflow Information
Difficulty level: Intermediate
Number of nodes: 11
Category: -
Node types: 7
Difficulty description

Suitable for experienced users; medium-complexity workflows containing 6-15 nodes

Author
Cheng Siong Chin

@cschin

Prof. Cheng Siong Chin serves as Chair Professor in Intelligent Systems Modelling and Simulation at Newcastle University in Singapore. His academic credentials include an M.Sc. in Advanced Control and Systems Engineering from The University of Manchester and a Ph.D. in Robotics from Nanyang Technological University.

External Links
View on n8n.io
