Ollama Setup Guide for Triplo AI
If you hold a Triplo AI PRO license or subscription, you can now use Ollama to serve models to Triplo AI. Please note that the Ollama integration is still in BETA, so expect changes over time. Follow the steps below to set it up.
Step 1: Install Ollama on Your Computer/Server
Windows Installation
- Download the executable file from the Ollama website.
- Run the downloaded executable file.
- Ollama will be installed automatically.
macOS Installation
- Download the macOS version of Ollama from the Ollama website.
- Once the download completes, unzip the downloaded file.
- Drag Ollama.app into your Applications folder.
Linux Installation
- Open your terminal.
- Run the following command: curl -fsSL https://ollama.com/install.sh | sh
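On any platform, you can confirm the installation worked with a quick version check in a terminal (an optional check, assuming the ollama binary is on your PATH):
ollama --version
It should print the installed Ollama version number.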
Step 2: Select the Desired Models
Visit Ollama's library to browse and select the models you want to use. If you're not sure which model to use, start with a small one (such as phi3:3.8b or gemma:2b).
Step 3: Download/Install the Selected Models
Run the following command in your terminal for each selected model, replacing %model_id% with the model's id; it downloads the model if needed and then opens an interactive chat: ollama run %model_id%
Example: ollama run deepseek-r1 or ollama run mannix/phi3-mini-4k
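If you prefer to download a model without opening a chat session, ollama pull is a standard alternative (shown here as an optional variation, using one of the small models mentioned above):
ollama pull phi3:3.8b
ollama list
The second command lists the models that are now installed locally, so you can confirm the download succeeded.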
Step 4: Exit the Ollama Chat Session
To exit the interactive chat session that ollama run opened and return to your terminal, type: /bye
Note that this only closes the chat session; it does not stop the Ollama application or service itself.
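For reference, the end of a session typically looks like the sketch below; the >>> prompt is what ollama run shows while a model is loaded:
>>> /bye
Typing /bye at that prompt returns you to your regular terminal prompt.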
Step 5: Start the Ollama Service
Run the following command to start the Ollama service: ollama serve
You should see output similar to the following:
Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8sTn5QJ3vRgXeMZQ68Whqj4uVfZlWnFJkz/aMrKQv Error: listen tcp 127.0.0.1:11434
If the output ends with an error mentioning listen tcp 127.0.0.1:11434, it usually means an Ollama service is already listening on that address, which is fine; what you need from this output is the address itself.
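To double-check that the service is reachable before moving on, you can query it with curl (an optional check; the address assumes the default 127.0.0.1:11434 shown above):
curl http://127.0.0.1:11434
A running server replies with the message "Ollama is running".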
Step 6: Copy the Address with the Port
In the example above, the address is 127.0.0.1:11434. Make sure to add http:// to the beginning and copy the full address (http://127.0.0.1:11434 in this example).
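If Ollama runs on a different machine than Triplo AI (an assumption about your setup, not part of the original steps), start the service bound to all network interfaces and use that machine's IP address instead of 127.0.0.1:
OLLAMA_HOST=0.0.0.0 ollama serve
OLLAMA_HOST is a standard Ollama environment variable that controls the address the service listens on.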
Step 7: Open Your Triplo AI Ollama Settings
Step 8: Paste the Address of Your Ollama Server
Paste the full address you copied in Step 6, including the http:// prefix and the port, into the Ollama settings.
Step 9: Enable Your Ollama
You can now select any of the models available on your Ollama server directly in Triplo AI.
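As a final check, you can confirm which models Triplo AI will be able to see by asking the Ollama API for its local model list (an optional verification using the standard /api/tags endpoint; adjust the address if yours differs):
curl http://127.0.0.1:11434/api/tags
The response is a JSON list of the models installed on your Ollama server.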