Rank MCP Servers using Contextual AI Reranker
Advanced
This is an automation workflow in the Miscellaneous, AI RAG, and Multimodal AI categories, containing 16 nodes. It mainly uses If, Code, Merge, HttpRequest, and Chat nodes, and implements dynamic MCP server selection using OpenAI GPT-4.1 and Contextual AI.
Prerequisites
- Credentials for the target APIs may be required
- OpenAI API Key
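Before importing, you can sanity-check both keys outside n8n. Below is a minimal Node.js sketch, assuming Node 18+ (for the built-in fetch) and keys exported as environment variables; the variable names mirror the workflow's CONTEXTUALAI_API_KEY variable and are otherwise illustrative, and treating the rerank fields beyond query/documents/model as optional is an assumption.

// Quick sanity check for the two API keys this workflow needs.
// Assumes Node.js 18+ (built-in fetch) and keys exported as env vars.
async function checkKeys() {
  // OpenAI: listing models only succeeds with a valid key.
  const openai = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });
  console.log("OpenAI key:", openai.ok ? "OK" : `failed (${openai.status})`);

  // Contextual AI: a tiny rerank request hits the same endpoint the workflow uses.
  const ctx = await fetch("https://api.contextual.ai/v1/rerank", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CONTEXTUALAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: "ping",
      documents: ["MCP Server: test\nDescription: test"],
      model: "ctxl-rerank-v2-instruct-multilingual",
    }),
  });
  console.log("Contextual AI key:", ctx.ok ? "OK" : `failed (${ctx.status})`);
}

checkKeys().catch(console.error);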
Workflow Export
Copy the following JSON configuration and import it into n8n to use this workflow
{
"id": "d1iK84AVOBn7nPRx",
"meta": {
"instanceId": "11121a0a0c6d26991d417aaff350a8e1836bf48496a817dba8b2be23aec9b053",
"templateCredsSetupCompleted": true
},
"name": "Rank MCP Servers using Contextual AI Reranker",
"tags": [],
"nodes": [
{
"id": "59b497fe-1934-4183-8a17-f3b30ca0f5c4",
"name": "OpenAI 채팅 모델",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
216,
-56
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {
"responseFormat": "json_object"
}
},
"credentials": {
"openAiApi": {
"id": "1qWYthUxPflxQXam",
"name": "OpenAi account"
}
},
"typeVersion": 1.2
},
{
"id": "a1c8a119-9b23-44ad-a1c0-2acef910beaf",
"name": "If",
"type": "n8n-nodes-base.if",
"position": [
496,
-280
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "47fd1d36-7a24-4086-9b68-ba5b42d9a714",
"operator": {
"type": "boolean",
"operation": "true",
"singleValue": true
},
"leftValue": "={{ $json.output.parseJson().use_mcp }}",
"rightValue": ""
}
]
}
},
"typeVersion": 2.2
},
{
"id": "3cfcff90-fdee-430a-951a-d30f8f487a6e",
"name": "병합",
"type": "n8n-nodes-base.merge",
"position": [
944,
-352
],
"parameters": {},
"typeVersion": 3.2
},
{
"id": "33cdc727-eaee-4898-b583-ec57c79362af",
"name": "병합1",
"type": "n8n-nodes-base.merge",
"position": [
1616,
-352
],
"parameters": {},
"typeVersion": 3.2
},
{
"id": "07450849-96b2-40a7-a9d1-5e1925d76f6c",
"name": "메모",
"type": "n8n-nodes-base.stickyNote",
"position": [
-624,
-528
],
"parameters": {
"width": 480,
"height": 1152,
"content": "# Dynamic MCP Selection\n## PROBLEM\nThousands of MCP Servers exist and many are updated daily, making server selection difficult for LLMs.\n- Current approaches require manually downloading and configuring servers, limiting flexibility.\n- When multiple servers are pre-configured, LLMs get overwhelmed and confused about which server to use for specific tasks.\n\n### This template enables dynamic server selection from a live PulseMCP directory of 5000+ servers.\n\n## How it works\n- A user query goes to an LLM that decides whether to use MCP servers to fulfill a given query and provides reasoning for its decision.\n- Next, we fetch MCP Servers from Pulse MCP API and format them as documents for reranking\n- Now, we use Contextual AI's Reranker to score and rank all MCP Servers based on our query and instructions\n\n## How to set up\n- Sign up for a free trial of Contextual AI [here](https://app.contextual.ai/) to find CONTEXTUALAI_API_KEY.\n- Click on variables option in left panel and add a new environment variable CONTEXTUALAI_API_KEY.\n- For the baseline model, we have used GPT 4.1 mini, you can find your OpenAI API key[ here](https://platform.openai.com/api-keys)\n\n## How to customize the workflow\n- We use chat trigger to initate the workflow. Feel free to replace it with a webhook or other trigger as required.\n- We use OpenAI's GPT 4.1 mini as the baseline model and reranker prompt generator. You can swap out this section to use the LLM of your choice.\n- We fetch 5000 MCP Servers from the PulseMCP directory as a baseline number, feel free to adjust this parameter as required.\n- We are using Contextual AI's ctxl-rerank-v2-instruct-multilingual reranker model, which can be swapped with any one of the following rerankers: \n 1) ctxl-rerank-v2-instruct-multilingual\n 2) ctxl-rerank-v2-instruct-multilingual-mini\n 3) ctxl-rerank-v1-instruct\n- You can checkout this [blog](https://contextual.ai/blog/introducing-instruction-following-reranker/) for more information about rerankers to make informed choice.\n- If you have feedback or need support, please email reranker-feedback@contextual.ai"
},
"typeVersion": 1
},
{
"id": "4fc2caf6-ba03-4507-82f9-3b88d0460e57",
"name": "메모1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-96,
-520
],
"parameters": {
"color": 7,
"width": 704,
"height": 608,
"content": "## 1. Determine whether MCP servers are needed\nBased on user's request, LLM determines the need for an MCP Server, provides a reason, and if needed, provides reranking instruction text which will be passed to reranker"
},
"typeVersion": 1
},
{
"id": "37386e9a-6051-4ef9-9e46-cbd4c60c7f80",
"name": "메모2",
"type": "n8n-nodes-base.stickyNote",
"position": [
672,
-520
],
"parameters": {
"color": 7,
"width": 640,
"height": 400,
"content": "## 2. Fetch MCP Server list and format them\nWe fetch 5000 MCP Servers from PulseMCP directory and parse them as documents to pass it onto the Contextual AI Reranker"
},
"typeVersion": 1
},
{
"id": "eef73a4d-eb47-4d2d-a7a9-44650e5ffc6b",
"name": "메모3",
"type": "n8n-nodes-base.stickyNote",
"position": [
1368,
-520
],
"parameters": {
"color": 7,
"width": 816,
"height": 400,
"content": "## 3. Rerank the servers and display top five results\nWe use Contextual AI's reranker to re-rank the servers and identify the top 5 servers based ont eh user query and re-ranker instruction, which is then formatted to be displayed in user friendly format.\n- You can checkout this [blog](https://contextual.ai/blog/introducing-instruction-following-reranker/) to learn more about rerankers"
},
"typeVersion": 1
},
{
"id": "b82d5e55-3ff9-4fd9-a37c-fc75c155353e",
"name": "User-Query",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
-80,
-280
],
"webhookId": "018048be-810b-4a22-82c4-9e7ed7f05e1a",
"parameters": {
"public": true,
"options": {
"responseMode": "responseNodes",
"allowFileUploads": true
},
"initialMessages": "Try MCP Reranker using Contextual AI's Reranker v2"
},
"typeVersion": 1.3
},
{
"id": "04a2eb05-a82b-4a86-a18d-ed01094ba638",
"name": "LLM 에이전트 for Decision-Making",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
144,
-280
],
"parameters": {
"options": {
"systemMessage": "=Analyze this user query and decide if it requires external tools/APIs (Model Context Protocol (MCP) servers) or can be answered directly.\n Query: \"{{ $json.chatInput }}\"\n\n Consider:\n - Does it need real-time data, web search, or external APIs?\n - Does it require specialized tools (file management, databases, etc.)?\n - Is it a complex task that would benefit from external services?\n - Can it be answered with general knowledge alone?\n\n If MCP is needed, also generate a concise reranking instruction for selecting the best external tools/APIs (MCPs) for this query.\n\n The instruction should:\n - Specify the exact capabilities/features/details that an MCP server requires for this query\n - Look for domain/field specificity and functionality needs\n - Any specific requirements that the user asks for\n - Highlight the user's prioritized criteria for server selection\n\n Base the instruction only on what is explicitly stated or clearly implied in the user's query.\n Do not assume additional requirements or preferences that are not present in the query.\n\n Respond with JSON: {\"use_mcp\": true/false, \"reason\": \"brief explanation\", \"instruction\": \"reranking instruction text or null if not needed\"}"
}
},
"typeVersion": 2.2
},
{
"id": "1cfbc30b-68ef-402f-a8ad-2aad77789d08",
"name": "PulseMCP Fetch MCP Servers",
"type": "n8n-nodes-base.httpRequest",
"position": [
720,
-280
],
"parameters": {
"url": "=https://api.pulsemcp.com/v0beta/servers",
"options": {},
"sendQuery": true,
"queryParameters": {
"parameters": [
{
"name": "count_per_page",
"value": "5000"
},
{
"name": "offset",
"value": "0"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "955343c1-540a-460b-a27f-84d2da2da40a",
"name": "Final Response1",
"type": "@n8n/n8n-nodes-langchain.chat",
"position": [
720,
-88
],
"parameters": {
"message": "= {{ $json.output.parseJson().reason }} Therefore, no MCP Servers are required to fulfill this request.",
"options": {},
"waitUserReply": false
},
"typeVersion": 1
},
{
"id": "a788876e-4bc7-4f6e-82aa-8617ba99cdc9",
"name": "Parse MCP Server list into documents w metadata",
"type": "n8n-nodes-base.code",
"position": [
1168,
-352
],
"parameters": {
"jsCode": "const servers = $input.first().json.servers || [];\nconst documents = [];\nconst metadata = [];\n\nfor (const server of servers) {\n documents.push(`MCP Server: ${server.name}\\nDescription: ${server.short_description}`);\n metadata.push(`Name: ${server.name}, Stars: ${server.github_stars}, Downloads: ${server.package_download_count}`);\n}\n\nconst aiOutputRaw = $('LLM Agent for Decision-Making').first().json.output;\nconst aiOutput = JSON.parse(aiOutputRaw);\n\nreturn [{\n json: {\n query: $('User-Query').first().json.chatInput,\n instruction: aiOutput.instruction, \n documents,\n metadata,\n servers\n }\n}];\n"
},
"typeVersion": 2
},
{
"id": "0b49e518-d9b6-4865-9cd4-658bb7317927",
"name": "ContextualAI Reranker",
"type": "n8n-nodes-base.httpRequest",
"position": [
1392,
-280
],
"parameters": {
"url": "https://api.contextual.ai/v1/rerank",
"method": "POST",
"options": {},
"sendBody": true,
"sendHeaders": true,
"bodyParameters": {
"parameters": [
{
"name": "query",
"value": "={{ $json.query }}"
},
{
"name": "instruction",
"value": "={{ $json.instruction }}"
},
{
"name": "documents",
"value": "={{ $json.documents }}"
},
{
"name": "metadata",
"value": "={{ $json.metadata }}"
},
{
"name": "model",
"value": "ctxl-rerank-v2-instruct-multilingual"
}
]
},
"headerParameters": {
"parameters": [
{
"name": "Authorization",
"value": "=Bearer {{$vars.CONTEXTUALAI_API_KEY}}"
},
{
"name": "Content-type",
"value": "application/json"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "30cf71cc-d8cb-44af-aaab-4fd9ae0bceb5",
"name": "Format the top 5 results",
"type": "n8n-nodes-base.code",
"position": [
1840,
-352
],
"parameters": {
"jsCode": "const results = $input.first().json.results || [];\nconst servers = $('Parse MCP Server list into documents w metadata').first().json.servers || [];\n\nconst top = results.slice(0, 5).map((r, i) => {\n const server = servers[r.index] || {};\n return {\n name: server.name || \"Unknown\",\n description: server.short_description || \"N/A\",\n stars: server.github_stars || 0,\n downloads: server.package_download_count || 0,\n score: r.relevance_score\n };\n});\n\nlet message = \"Top MCP Servers \\n\\n\";\ntop.forEach((s, i) => {\n message += `${i + 1}. ${s.name} (⭐ ${s.stars}, ⬇️ ${s.downloads}, 🔎 ${s.score.toFixed(2)})\\n ${s.description}\\n\\n`;\n});\n\nreturn [{ json: { message } }];\n"
},
"typeVersion": 2
},
{
"id": "395b94c6-bba5-4585-bbf8-e3272699c2ac",
"name": "Final Response2",
"type": "@n8n/n8n-nodes-langchain.chat",
"position": [
2064,
-352
],
"parameters": {
"message": "={{ $json.message }}",
"options": {},
"waitUserReply": false
},
"typeVersion": 1
}
],
"active": true,
"pinData": {},
"settings": {
"callerPolicy": "workflowsFromSameOwner",
"executionOrder": "v1"
},
"versionId": "4fd9aecc-d9c0-4efd-87c7-3385c810fc75",
"connections": {
"a1c8a119-9b23-44ad-a1c0-2acef910beaf": {
"main": [
[
{
"node": "1cfbc30b-68ef-402f-a8ad-2aad77789d08",
"type": "main",
"index": 0
},
{
"node": "Merge",
"type": "main",
"index": 1
}
],
[
{
"node": "955343c1-540a-460b-a27f-84d2da2da40a",
"type": "main",
"index": 0
}
]
]
},
"Merge": {
"main": [
[
{
"node": "a788876e-4bc7-4f6e-82aa-8617ba99cdc9",
"type": "main",
"index": 0
}
]
]
},
"Merge1": {
"main": [
[
{
"node": "30cf71cc-d8cb-44af-aaab-4fd9ae0bceb5",
"type": "main",
"index": 0
}
]
]
},
"b82d5e55-3ff9-4fd9-a37c-fc75c155353e": {
"main": [
[
{
"node": "LLM Agent for Decision-Making",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "LLM Agent for Decision-Making",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"0b49e518-d9b6-4865-9cd4-658bb7317927": {
"main": [
[
{
"node": "Merge1",
"type": "main",
"index": 0
}
]
]
},
"30cf71cc-d8cb-44af-aaab-4fd9ae0bceb5": {
"main": [
[
{
"node": "395b94c6-bba5-4585-bbf8-e3272699c2ac",
"type": "main",
"index": 0
}
]
]
},
"1cfbc30b-68ef-402f-a8ad-2aad77789d08": {
"main": [
[
{
"node": "Merge",
"type": "main",
"index": 0
}
]
]
},
"LLM Agent for Decision-Making": {
"main": [
[
{
"node": "a1c8a119-9b23-44ad-a1c0-2acef910beaf",
"type": "main",
"index": 0
}
]
]
},
"a788876e-4bc7-4f6e-82aa-8617ba99cdc9": {
"main": [
[
{
"node": "0b49e518-d9b6-4865-9cd4-658bb7317927",
"type": "main",
"index": 0
},
{
"node": "Merge1",
"type": "main",
"index": 1
}
]
]
}
}
}
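For reference, the core of this workflow (steps 2 and 3 in the sticky notes above) boils down to two HTTP calls that can be reproduced outside n8n. Below is a minimal Node.js sketch, assuming Node 18+ and CONTEXTUALAI_API_KEY in the environment; it mirrors the PulseMCP Fetch MCP Servers node, the parsing Code node, the ContextualAI Reranker node, and the final formatting node. The example query and instruction are illustrative.

// Standalone sketch of the workflow's core pipeline: fetch MCP servers
// from PulseMCP, rerank them with Contextual AI, and keep the top 5.
// Assumes Node.js 18+ (built-in fetch) and CONTEXTUALAI_API_KEY in the environment.

async function rankMcpServers(query, instruction) {
  // Step 2: fetch the server directory (same endpoint and query params as the HTTP node).
  const url = new URL("https://api.pulsemcp.com/v0beta/servers");
  url.searchParams.set("count_per_page", "5000");
  url.searchParams.set("offset", "0");
  const { servers = [] } = await (await fetch(url)).json();

  // Format servers as parallel documents/metadata arrays, as the Code node does.
  const documents = servers.map(
    (s) => `MCP Server: ${s.name}\nDescription: ${s.short_description}`
  );
  const metadata = servers.map(
    (s) => `Name: ${s.name}, Stars: ${s.github_stars}, Downloads: ${s.package_download_count}`
  );

  // Step 3: rerank with Contextual AI (same body fields as the HTTP node).
  const res = await fetch("https://api.contextual.ai/v1/rerank", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CONTEXTUALAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query,
      instruction,
      documents,
      metadata,
      model: "ctxl-rerank-v2-instruct-multilingual",
    }),
  });
  const { results = [] } = await res.json();

  // Keep the top 5 results, as the final formatting Code node does.
  return results.slice(0, 5).map((r) => ({
    name: servers[r.index]?.name ?? "Unknown",
    score: r.relevance_score,
  }));
}

// Illustrative usage; the query and instruction are made up for this example.
rankMcpServers(
  "Find an MCP server for querying Postgres databases",
  "Prefer servers with database connectivity and SQL query support"
).then(console.log).catch(console.error);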
Frequently Asked Questions
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste in the configuration, and adjust the credential settings as needed.
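If you self-host n8n, the bundled CLI can also import a workflow saved to a file. A hedged alternative (the import:workflow command is part of the n8n CLI; the file name here is illustrative and flags may vary by version):

n8n import:workflow --input=rank-mcp-servers.json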
Which scenarios is this workflow suited for?
Advanced - Miscellaneous, AI RAG, Multimodal AI
Is it paid?
This workflow is completely free and you can import and use it directly. However, third-party services it relies on (such as the OpenAI API) may incur costs that you pay yourself.
Related workflow recommendations
- Real Estate Finder Crawler Assistant: answer real estate questions with AI using PropertyFinder.ae, OpenRouter, and SerpAPI (If, Set, Code; 18 nodes; George Zargaryan; Miscellaneous)
- From PDF to Order: automatically convert PDF purchase orders into Adobe Commerce sales orders using AI (If, Set, Code; 96 nodes; JKingma; Document Extraction)
- Instagram Slideshows with 5 AI Chat Automations: create and publish social media slideshows on 5 platforms using AI and Blotato (If, Wait, Merge; 29 nodes; Sabrina Ramonov 🍄; Miscellaneous)
- Contextual Hybrid RAG AI Content: sync Google Drive to a Supabase contextual vector database for RAG applications (If, Set, Code; 76 nodes; Michael Taleb; AI RAG)
- ✨🩷 Automated Social Media Content Publishing Factory + System Prompt Composition: AI-driven multi-platform social media content factory based on dynamic system prompts and GPT-4o (If, Set, Code; 100 nodes; Amit Mehta; Content Creation)
- BigQuery RAG with OpenAI Embeddings: answer document-related questions using BigQuery RAG and OpenAI (Set, Http Request, Agent; 24 nodes; Dataki; Miscellaneous)