Find the Best Local Ollama Vision Models by Comparison
This is a popular workflow in the AI category, containing 19 nodes. It mainly uses the Set, SplitOut, Google Docs, Google Drive, and HTTP Request nodes, combining them with AI techniques for smart automation: it runs the same image-analysis task against several locally hosted Ollama vision models and saves each model's output to Google Docs for comparison.
- Google Drive API credentials
- Credentials for the target APIs may also be required
Nodes Used (19)
Category
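At its core, the workflow loops over a list of models: it base64-encodes the image, then POSTs the same prompt to each vision model through Ollama's /api/chat endpoint and collects the replies. The minimal Python sketch below mirrors that loop; the model list and request shape are taken from the workflow itself, while the image path and the shortened prompt are placeholder assumptions. The full n8n export follows.

import base64
import requests

# Default local Ollama endpoint, as used by the "Ollama LLM Request" node.
OLLAMA_URL = "http://127.0.0.1:11434/api/chat"
# Model list taken from the "List of Vision Models" node.
MODELS = ["granite3.2-vision", "llama3.2-vision", "gemma3:27b"]

# Placeholder image path; the workflow instead downloads this file from Google Drive.
with open("test-image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

for model in MODELS:
    # Same request shape the "Create Request Body" node assembles.
    payload = {
        "model": model,
        "messages": [{
            "role": "user",
            "content": "Analyze this image in exhaustive detail.",  # shortened stand-in prompt
            "images": [image_b64],
        }],
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=600)
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]
    # Tag each answer with its model name, as the Google Docs node does.
    print(f"<{model}>\n{answer}\n</{model}>\n")

Wrapping each reply in model-name tags mirrors the "Save Image Description to Google Docs" node, so the outputs of different models end up side by side in one document for easy comparison.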
{
"id": "keFEBUqHOrsib60G",
"meta": {
"instanceId": "31e69f7f4a77bf465b805824e303232f0227212ae922d12133a0f96ffeab4fef",
"templateCredsSetupCompleted": true
},
"name": "๐ฆ๐๏ธ๐๏ธ Find the Best Local Ollama Vision Models by Comparison",
"tags": [],
"nodes": [
{
"id": "dd2f1201-a78a-4ea9-b5ff-7543673e8445",
"name": "์คํฐ์ปค ๋ฉ๋ชจ1",
"type": "n8n-nodes-base.stickyNote",
"position": [
1080,
1160
],
"parameters": {
"color": 4,
"width": 340,
"height": 340,
"content": "## ๐๏ธ Analyze Image with Local Ollama LLM\n"
},
"typeVersion": 1
},
{
"id": "81975be4-1e40-41e9-b938-612270f80a92",
"name": "์คํฐ์ปค ๋ฉ๋ชจ2",
"type": "n8n-nodes-base.stickyNote",
"position": [
280,
640
],
"parameters": {
"color": 4,
"width": 300,
"height": 300,
"content": "## ๐Try Me!"
},
"typeVersion": 1
},
{
"id": "3a56f75b-4836-4c37-a246-83ef6507c581",
"name": "'ํ
์คํธ ์ํฌํ๋ก์ฐ' ํด๋ฆญ ์",
"type": "n8n-nodes-base.manualTrigger",
"position": [
380,
740
],
"parameters": {},
"typeVersion": 1
},
{
"id": "bb4570c7-269c-4d28-85d4-183ca2fabb89",
"name": "Ollama LLM ์์ฒญ",
"type": "n8n-nodes-base.httpRequest",
"position": [
1200,
1280
],
"parameters": {
"url": "http://127.0.0.1:11434/api/chat",
"method": "POST",
"options": {},
"jsonBody": "={{ $json.body }}",
"sendBody": true,
"sendHeaders": true,
"specifyBody": "json",
"headerParameters": {
"parameters": [
{
"name": "Content-Type",
"value": " application/json"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "0a6e064d-9a67-4cd2-b3ed-247a1684c1fb",
"name": "์์ฒญ ๋ณธ๋ฌธ ์์ฑ",
"type": "n8n-nodes-base.set",
"position": [
840,
1280
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "be9a8e21-9bb6-4588-a77a-61bc2def0648",
"name": "body",
"type": "string",
"value": "={\n \"model\": \"{{ $json.models }}\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"{{ $json.user_prompt }}\",\n \"images\": [\"{{ $('List of Vision Models').item.json.data }}\"]\n }\n ],\n \"stream\": false\n}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "f3119aff-62bc-4ffc-abe9-835dea105d76",
"name": "Ollama ๋ชจ๋ธ ์ํ ์ฒ๋ฆฌ",
"type": "n8n-nodes-base.splitInBatches",
"position": [
360,
1180
],
"parameters": {
"options": {}
},
"typeVersion": 3
},
{
"id": "1e26f493-881e-40a3-922d-5c8d6cb86374",
"name": "๊ฒฐ๊ณผ ๊ฐ์ฒด ์์ฑ",
"type": "n8n-nodes-base.set",
"position": [
620,
1080
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "780086e5-2733-435a-90b5-fd10946ddd7a",
"name": "result",
"type": "object",
"value": "={{ $json }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "ac0d3ada-8890-4945-aedb-fd6be4ffc020",
"name": "์ผ๋ฐ ์ด๋ฏธ์ง ํ๋กฌํํธ",
"type": "n8n-nodes-base.set",
"position": [
620,
1280
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "be9a8e21-9bb6-4588-a77a-61bc2def0648",
"name": "user_prompt",
"type": "string",
"value": "=Analyze this image in exhaustive detail using this structure:\\n\\n1. **Comprehensive Inventory**\\n- List all visible objects with descriptors (size, color, position)\\n- Group related items hierarchically (primary subject โ secondary elements โ background)\\n- Note object conditions (intact/broken, new/aged)\\n\\n2. **Contextual Analysis**\\n- Identify probable setting/location with supporting evidence\\n- Determine time period/season through visual cues\\n- Analyze lighting conditions and shadow orientation\\n\\n3. **Spatial Relationships**\\n- Map object positions using grid coordinates (front/center/back, left/right)\\n- Describe size comparisons between elements\\n- Note overlapping/occluded objects\\n\\n4. **Textual Elements**\\n- Extract ALL text with font characteristics\\n- Identify logos/brands with confidence estimates\\n- Translate non-native text with cultural context\\n\\nFormat response in markdown with clear section headers and bullet points."
}
]
},
"includeOtherFields": true
},
"typeVersion": 3.4
},
{
"id": "b7faafca-a179-43b2-8318-29b4659d424f",
"name": "๋ถ๋์ฐ ์คํ๋ ๋์ํธ ํ๋กฌํํธ",
"type": "n8n-nodes-base.set",
"disabled": true,
"position": [
620,
1480
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "be9a8e21-9bb6-4588-a77a-61bc2def0648",
"name": "user_prompt",
"type": "string",
"value": "=Analyze this spreadsheet image in exhaustive detail using this structure:\\n\\n1. **Table Structure**\\n- Identify all column headers (months) in order\\n- List all row labels exactly as shown\\n- Note any table titles, footnotes, or metadata\\n\\n2. **Data Extraction**\\n- Extract all numeric values with precise formatting (decimals, currency symbols)\\n- Maintain exact numbers for Listings, Sales, Months of Inventory\\n- Preserve currency formatting for Avg. Price values\\n- Include DOM values from separate section\\n\\n3. **Markdown Representation**\\n- Convert the entire spreadsheet into a perfectly formatted markdown table\\n- Maintain alignment of all columns and rows\\n- Preserve all relationships between data points\\n\\n4. **Data Analysis**\\n- Identify trends across months for each metric\\n- Note highest and lowest values in each category\\n- Calculate percentage changes between months where relevant\\n\\nFormat response with a complete markdown table first, followed by brief analysis of the real estate market data shown."
}
]
},
"includeOtherFields": true
},
"typeVersion": 3.4
},
{
"id": "5c28053b-a44e-494b-ad03-27d5b217f6b3",
"name": "๋น์ ๋ชจ๋ธ ๋ชฉ๋ก",
"type": "n8n-nodes-base.set",
"position": [
1440,
740
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "86add667-cd96-4e1c-877a-c437f6b1e040",
"name": "models",
"type": "array",
"value": "=[\"granite3.2-vision\",\"llama3.2-vision\",\"gemma3:27b\"]"
}
]
},
"includeOtherFields": true
},
"typeVersion": 3.4
},
{
"id": "58c2fcd2-0ac4-4684-a30f-37650cc8dac1",
"name": "Base64 ๋ฌธ์์ด ๊ฐ์ ธ์ค๊ธฐ",
"type": "n8n-nodes-base.extractFromFile",
"position": [
1140,
740
],
"parameters": {
"options": {},
"operation": "binaryToPropery"
},
"typeVersion": 1
},
{
"id": "b60c0589-397c-445b-a084-a791bef95b15",
"name": "Google ๋๋ผ์ด๋ธ์์ ์ด๋ฏธ์ง ํ์ผ ๋ค์ด๋ก๋",
"type": "n8n-nodes-base.googleDrive",
"position": [
920,
740
],
"parameters": {
"fileId": {
"__rl": true,
"mode": "id",
"value": "={{ $json.id }}"
},
"options": {},
"operation": "download"
},
"credentials": {
"googleDriveOAuth2Api": {
"id": "UhdXGYLTAJbsa0xX",
"name": "Google Drive account"
}
},
"typeVersion": 3
},
{
"id": "55a8f511-fdb5-4830-837a-104cbf6c6167",
"name": "์ํ ์ฒ๋ฆฌ๋ฅผ ์ํ ๋น์ ๋ชจ๋ธ ๋ชฉ๋ก ๋ถํ ",
"type": "n8n-nodes-base.splitOut",
"position": [
1640,
740
],
"parameters": {
"options": {},
"fieldToSplitOut": "models"
},
"typeVersion": 1
},
{
"id": "8e48c8bd-15c9-4389-8698-77dc5ae698bc",
"name": "์คํฐ์ปค ๋ฉ๋ชจ",
"type": "n8n-nodes-base.stickyNote",
"position": [
620,
640
],
"parameters": {
"color": 7,
"width": 700,
"height": 300,
"content": "## โฌ๏ธDownload Image from Google Drive"
},
"typeVersion": 1
},
{
"id": "6ed8925d-b031-4052-9009-91e2e7d8f360",
"name": "์คํฐ์ปค ๋ฉ๋ชจ3",
"type": "n8n-nodes-base.stickyNote",
"position": [
1360,
640
],
"parameters": {
"color": 7,
"width": 460,
"height": 300,
"content": "## ๐Create List of Local Ollama Vision Models"
},
"typeVersion": 1
},
{
"id": "ae383e4f-21e6-479f-97e0-029f43dacc56",
"name": "์คํฐ์ปค ๋ฉ๋ชจ4",
"type": "n8n-nodes-base.stickyNote",
"position": [
280,
980
],
"parameters": {
"color": 7,
"width": 1200,
"height": 720,
"content": "## ๐ฆ๐๏ธ๐๏ธ Process Image with Ollama Vision Models and Save Results to Google Drive"
},
"typeVersion": 1
},
{
"id": "a27bcb6e-c6e8-4777-9887-428363256b4a",
"name": "Google ๋ฌธ์ ์ด๋ฏธ์ง ID",
"type": "n8n-nodes-base.set",
"position": [
700,
740
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "7d5a0385-4d8b-4f70-b3b0-4182bda29e5c",
"name": "id",
"type": "string",
"value": "=[your-google-id]"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "8e6114f8-c724-40fd-9be3-253e3cb882fa",
"name": "์ด๋ฏธ์ง ์ค๋ช
์ Google Docs์ ์ ์ฅ",
"type": "n8n-nodes-base.googleDocs",
"position": [
840,
1080
],
"parameters": {
"actionsUi": {
"actionFields": [
{
"text": "=<{{ $json.result.model }}>\n{{ $json.result.message.content }}\n</{{ $json.result.model }}>\n\n",
"action": "insert"
}
]
},
"operation": "update",
"documentURL": "[your-google-doc-id]"
},
"credentials": {
"googleDocsOAuth2Api": {
"id": "YWEHuG28zOt532MQ",
"name": "Google Docs account"
}
},
"typeVersion": 2
},
{
"id": "abed9af8-0d50-413a-9e6d-c6100ddaf015",
"name": "์คํฐ์ปค ๋ฉ๋ชจ5",
"type": "n8n-nodes-base.stickyNote",
"position": [
-240,
640
],
"parameters": {
"width": 480,
"height": 1340,
"content": "## ๐ฆ๐๏ธ๐๏ธ Find the Best Local Ollama Vision Models for Your Use Case\n\nProcess images using locally hosted Ollama Vision Models to extract detailed descriptions, contextual insights, and structured data. Save results directly to Google Docs for efficient collaboration.\n\n### Who is this for?\nThis workflow is ideal for developers, data analysts, and AI enthusiasts who need to process and analyze images using locally hosted Ollama Vision Language Models. Itโs particularly useful for tasks requiring detailed image descriptions, contextual analysis, and structured data extraction.\n\n### What problem is this workflow solving? / Use Case\nThe workflow solves the challenge of extracting meaningful insights from images in exhaustive detail, such as identifying objects, analyzing spatial relationships, extracting textual elements, and providing contextual information. This is especially helpful for applications in real estate, marketing, engineering, and research.\n\n### What this workflow does\nThis workflow:\n1. Downloads an image file from Google Drive.\n2. Processes the image using multiple Ollama Vision Models (e.g., Granite3.2-Vision, Llama3.2-Vision).\n3. Generates detailed markdown-based descriptions of the image.\n4. Saves the output to a Google Docs file for easy sharing and further analysis.\n\n### Setup\n1. Ensure you have access to a local instance of Ollama. https://ollama.com/\n2. Pull the Ollama vision models.\n3. Configure your Google Drive and Google Docs credentials in n8n.\n4. Provide the image file ID from Google Drive in the designated node.\n5. Update the list of Ollama vision models\n6. Test the workflow by clicking โTest Workflowโ to trigger the process.\n\n### How to customize this workflow to your needs\n- Replace the image source with another provider if needed (e.g., AWS S3 or Dropbox).\n- Modify the prompts in the \"General Image Prompt\" node to suit specific analysis requirements.\n- Add additional nodes for post-processing or integrating results into other platforms like Slack or HubSpot.\n\n## Key Features:\n- **Detailed Image Analysis**: Extracts comprehensive details about objects, spatial relationships, text elements, and contextual settings.\n- **Multi-Model Support**: Utilizes multiple vision models dynamically for optimal performance.\n- **Markdown Output**: Formats results in markdown for easy readability and documentation.\n- **Google Drive Integration**: Seamlessly downloads images and saves results to Google Docs.\n\n\n"
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "a337e019-1c9a-4736-8dcd-4f12a9d989f4",
"connections": {
"58c2fcd2-0ac4-4684-a30f-37650cc8dac1": {
"main": [
[
{
"node": "5c28053b-a44e-494b-ad03-27d5b217f6b3",
"type": "main",
"index": 0
}
]
]
},
"bb4570c7-269c-4d28-85d4-183ca2fabb89": {
"main": [
[
{
"node": "f3119aff-62bc-4ffc-abe9-835dea105d76",
"type": "main",
"index": 0
}
]
]
},
"0a6e064d-9a67-4cd2-b3ed-247a1684c1fb": {
"main": [
[
{
"node": "bb4570c7-269c-4d28-85d4-183ca2fabb89",
"type": "main",
"index": 0
}
]
]
},
"a27bcb6e-c6e8-4777-9887-428363256b4a": {
"main": [
[
{
"node": "b60c0589-397c-445b-a084-a791bef95b15",
"type": "main",
"index": 0
}
]
]
},
"ac0d3ada-8890-4945-aedb-fd6be4ffc020": {
"main": [
[
{
"node": "0a6e064d-9a67-4cd2-b3ed-247a1684c1fb",
"type": "main",
"index": 0
}
]
]
},
"1e26f493-881e-40a3-922d-5c8d6cb86374": {
"main": [
[
{
"node": "8e6114f8-c724-40fd-9be3-253e3cb882fa",
"type": "main",
"index": 0
}
]
]
},
"5c28053b-a44e-494b-ad03-27d5b217f6b3": {
"main": [
[
{
"node": "55a8f511-fdb5-4830-837a-104cbf6c6167",
"type": "main",
"index": 0
}
]
]
},
"f3119aff-62bc-4ffc-abe9-835dea105d76": {
"main": [
[
{
"node": "1e26f493-881e-40a3-922d-5c8d6cb86374",
"type": "main",
"index": 0
}
],
[
{
"node": "ac0d3ada-8890-4945-aedb-fd6be4ffc020",
"type": "main",
"index": 0
}
]
]
},
"b7faafca-a179-43b2-8318-29b4659d424f": {
"main": [
[]
]
},
"3a56f75b-4836-4c37-a246-83ef6507c581": {
"main": [
[
{
"node": "a27bcb6e-c6e8-4777-9887-428363256b4a",
"type": "main",
"index": 0
}
]
]
},
"b60c0589-397c-445b-a084-a791bef95b15": {
"main": [
[
{
"node": "58c2fcd2-0ac4-4684-a30f-37650cc8dac1",
"type": "main",
"index": 0
}
]
]
},
"55a8f511-fdb5-4830-837a-104cbf6c6167": {
"main": [
[
{
"node": "f3119aff-62bc-4ffc-abe9-835dea105d76",
"type": "main",
"index": 0
}
]
]
}
}
}
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings as needed.
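Before the first run, it also helps to confirm that the local Ollama instance is reachable and that the models named in the "List of Vision Models" node have been pulled. Below is a small pre-flight check, a sketch that assumes Ollama's default local endpoint and its documented /api/tags listing.

import requests

# Models the workflow expects; adjust if you edited the "List of Vision Models" node.
required = ["granite3.2-vision", "llama3.2-vision", "gemma3:27b"]

# Ollama's /api/tags endpoint lists all locally pulled models.
tags = requests.get("http://127.0.0.1:11434/api/tags", timeout=10).json()
installed = {m["name"] for m in tags.get("models", [])}

# Ollama appends ":latest" when no tag is given, so also compare base names.
base_names = {name.split(":")[0] for name in installed}
missing = [m for m in required
           if m not in installed and m.split(":")[0] not in base_names]

if missing:
    print("Missing models, run 'ollama pull <name>' for:", ", ".join(missing))
else:
    print("All required vision models are available.")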
Which scenarios is this workflow suited for?
Advanced - AI
Is it paid?
This workflow is completely free and can be imported and used directly. However, third-party services used by a workflow (e.g., the OpenAI API) may charge you their own usage fees.
Related Workflow Recommendations
Joseph LePage
@joe
As an AI Automation consultant based in Canada, I partner with forward-thinking organizations to implement AI solutions that streamline operations and drive growth.
Share this workflow