
Publication

Use your own customized open-source Large Language Model

Author(s): Taha Azizi

Originally published on Towards AI.

You’ve built it. Now unleash it.

Learn how to use your own fine-tuned model

You already fine-tuned a model (great!). Now it's time to use it: convert it to GGUF, quantize it for local hardware, wrap it in an Ollama Modelfile, validate its outputs, and surface it so it starts producing real value. This step-by-step guide gives the exact commands, checks, test suites, integration snippets, and practical tradeoffs so your model stops being a demo and starts solving problems.

Fine-tuning is the creation part — useful, but invisible unless you actually run and integrate the model. This guide turns your tuned checkpoint into something your team (or customers) can call, test, and improve.

Assumes: you have a fine-tuned Hugging Face Llama 3 (or compatible) model folder ready on disk. If you tuned it in my previous article, you’re 100% ready.

Quick checklist before we start

  • Enough disk space for the model.
  • Python (3.9+), git, and make for building the tools.
  • llama.cpp cloned and built, for convert_hf_to_gguf.py and quantize (step-by-step build guide in the appendix; a quick verification snippet follows this list).
  • ollama installed and running.
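A quick way to confirm the tooling is in place before you start (a sketch; package names and paths depend on your platform):

# check the prerequisites from the list above
python3 --version    # expect 3.9 or newer
git --version
make --version
ollama --version     # confirms the Ollama CLI is installed and on PATH
df -h .              # confirm enough free disk space for the model files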
Step-by-step guide: how to deploy your fine-tuned model locally

Step 1 — Convert your fine-tuned checkpoint to GGUF (f16)

Run the conversion script inside the llama.cpp repo. This produces an f16 GGUF — a high-fidelity representation we’ll quantize next.

# from inside the llama.cpp directory (where convert_hf_to_gguf.py lives)
python convert_hf_to_gguf.py /path/to/your-finetuned-hf-model \
--outfile model.gguf \
--outtype f16

Why f16 first? Converting to f16 preserves numeric precision so you can compare quality before/after quantization.
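As a quick sanity check, you can compare the exported GGUF against the original checkpoint folder; if the checkpoint is already stored in fp16/bf16, the sizes should be roughly comparable (paths are illustrative):

du -sh /path/to/your-finetuned-hf-model   # original Hugging Face checkpoint folder
ls -lh model.gguf                         # the f16 GGUF produced above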

Step 2 — Quantize the GGUF for local hardware

Quantization makes models much smaller and faster at inference. Choose a mode depending on your hardware and quality needs.

# example: balanced CPU option
./quantize model.gguf model-q4_k_m.gguf q4_k_m

Other options & tradeoffs

  • q4_k_m: great CPU balance (speed + quality).
  • q4_0, q5_*: alternative settings; q5 is often better for some GPU setups, while q4_0 is sometimes faster but lower quality.
  • If quality drops too much, keep the f16 version for critical uses (a quick comparison sketch follows this list).
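If you want to choose empirically, one approach is to produce several quantized files from the same f16 GGUF and compare sizes before testing quality (the modes shown are examples, not recommendations):

# quantize the same f16 GGUF into a few candidate modes
./quantize model.gguf model-q4_0.gguf   q4_0
./quantize model.gguf model-q4_k_m.gguf q4_k_m
./quantize model.gguf model-q5_k_m.gguf q5_k_m
ls -lh model*.gguf   # smaller file = lighter and faster, usually at some quality cost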

Step 3 — Create an Ollama Modelfile (blueprint for runtime)

Put this Modelfile next to your model-q4_k_m.gguf. This tells Ollama where the model is, which chat template to use, and what the system persona should be.

Create a file named Modelfile (no extension):

FROM ./model-q4_k_m.gguf

# Chat template for many Llama 3 instruct-style models
TEMPLATE """<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>"""

SYSTEM """You are an expert at improving and refining image-generation prompts.
You transform short user ideas into clear, vivid, composition-aware prompts.
Ask clarifying questions for underspecified requests. Prefer concrete sensory details (lighting, color palettes, camera lenses, composition)."""

PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_header_id|>"

Notes

  • If your fine-tuned model uses different prompt markers, adapt TEMPLATE (a ChatML-style example follows these notes).
  • The SYSTEM block (the system prompt) is your single most effective lever for changing tone and behavior.
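For example, if your base model was trained with ChatML-style markers instead of Llama 3 headers, the TEMPLATE and stop token would look roughly like this (a sketch; check your model's tokenizer config for the exact markers it expects):

TEMPLATE """<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
{{ .Response }}<|im_end|>"""
PARAMETER stop "<|im_end|>"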

Step 4 — Create the Ollama model

With Ollama running locally, create the model entry:

ollama create my-prompt-improver -f ./Modelfile

If successful, Ollama adds my-prompt-improver to your local model list.
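You can confirm the model was registered by listing your local models; the new entry should appear alongside any models you already have:

ollama list
# the output should include a line for my-prompt-improver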

Step 5 — Quick Interactive Validation

Run it interactively:

ollama run my-prompt-improver
# then type a prompt, e.g.
make this prompt better: a neon cyberpunk alley at midnight, rainy reflections, lone saxophone player

Alternatively, Ollama now ships with a UI where you can test your model quickly. First, select the LLM you fine-tuned:

Now you can choose your own fine-tuned model in the Ollama UI

Then start using it (it's that simple):

My customized fine-tuned model is generating amazing results!

Sanity checks

  1. Fidelity: Compare output from model.gguf (f16) and model-q4_k_m.gguf. If f16 looks much better, quantization degraded quality.
  2. Persona: Does it adopt the system voice? If not, tweak SYSTEM.

Step 6 — Batch test & compare (automated)

Run a suite of prompts through the f16 and q4 models and save the outputs for A/B comparison. Save this script as compare_models.py.

# compare_models.py
import csv
import subprocess
from pathlib import Path

PROMPTS = [
    "Sunset over a coastal village, cinematic, warm tones, 35mm lens",
    "A cute corgi astronaut bouncing on the moon",
    "Describe a dystopian future city in one paragraph, focus on smells and textures",
    # add more prompts...
]

def run_model(model, prompt):
    # Pipe the prompt into `ollama run <model>` via stdin and capture the response.
    p = subprocess.run(["ollama", "run", model],
                       input=prompt, capture_output=True, text=True)
    return p.stdout.strip()

def main():
    out = Path("model_comparison.csv")
    with out.open("w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["prompt", "model", "output"])
        for prompt in PROMPTS:
            # replace with your actual model names if different
            for model in ["my-prompt-improver-f16", "my-prompt-improver-q4"]:
                o = run_model(model, prompt)
                writer.writerow([prompt, model, o])
    print("Wrote", out)

if __name__ == "__main__":
    main()

How to use

  • Create two Ollama models pointing to model.gguf and model-q4_k_m.gguf respectively (e.g., my-prompt-improver-f16 and my-prompt-improver-q4; a sketch follows this list), then run the script.
  • Manually review model_comparison.csv or run diffs to measure missing details.
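A minimal way to set that up is two Modelfiles that differ only in their FROM line, one pointing at model.gguf and one at model-q4_k_m.gguf (the filenames below are illustrative):

# Modelfile.f16 contains: FROM ./model.gguf
# Modelfile.q4  contains: FROM ./model-q4_k_m.gguf
ollama create my-prompt-improver-f16 -f ./Modelfile.f16
ollama create my-prompt-improver-q4  -f ./Modelfile.q4
python compare_models.py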

Test suite (20–50 prompts)

Create a test suite of 20 to 50 prompts for functional testing, ranging from simple to complex to ambiguous to requests that should trigger clarifying questions.
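A simple way to organize such a suite is a categorized prompt list you can flatten into the PROMPTS list used by compare_models.py (the file name and example prompts below are illustrative):

# test_prompts.py -- sketch of a categorized functional test suite
TEST_PROMPTS = {
    "simple": [
        "a red bicycle leaning against a brick wall",
    ],
    "complex": [
        "a rainy neon alley at midnight, lone saxophone player, reflections on wet asphalt, 35mm lens",
    ],
    "ambiguous": [
        "make it feel nostalgic",
    ],
    "clarifying": [
        "improve this prompt",  # should trigger a clarifying question
    ],
}

# flatten into the PROMPTS list expected by compare_models.py
PROMPTS = [p for group in TEST_PROMPTS.values() for p in group]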

Step 7 — Integrate your model (example: small local API)

Expose your model as a tiny local API that any app can call. This example pipes the prompt into ollama run through a subprocess, so it works without relying on Ollama's internal HTTP API.

# run_api.py (FastAPI example)
from fastapi import FastAPI
from pydantic import BaseModel
import subprocess

app = FastAPI()

class Req(BaseModel):
    prompt: str

def call_model(prompt: str, model: str = "my-prompt-improver"):
    # Pipe the prompt into `ollama run <model>` and return its stdout.
    p = subprocess.run(["ollama", "run", model],
                       input=prompt, capture_output=True, text=True)
    return p.stdout

@app.post("/generate")
def generate(req: Req):
    out = call_model(req.prompt)
    return {"output": out}
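To try the endpoint locally, run the app with uvicorn and call it with curl (the port and payload are illustrative):

pip install fastapi uvicorn
uvicorn run_api:app --host 127.0.0.1 --port 8000

# in another terminal
curl -X POST http://127.0.0.1:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "make this prompt better: a misty mountain lake at dawn"}'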

Production notes

  • In prod, use a process manager and limit concurrency.
  • Use authentication (JWT, API keys) around this endpoint.
  • Add caching for repeated prompts.
Now you can monitor the model's performance and improve it based on feedback

Wrap up: what to do after this guide

  1. Run a 20-50 prompt test suite and compare f16 vs. quantized outputs.
  2. Build a small FastAPI wrapper and integrate into one internal workflow.
  3. Gather user feedback and fine-tune again on corrections.

Appendix: Cloning and Building llama.cpp

To run convert_hf_to_gguf.py and quantize, you first need to clone and build llama.cpp from source. This gives you all the tools you need to prepare and optimize your model for local inference.

1. Install Prerequisites

Before cloning, make sure you have the necessary tools:

# Ubuntu/Debian
sudo apt update && sudo apt install -y build-essential python3 python3-pip git cmake
# macOS
brew install cmake python git
# Windows (PowerShell)
choco install cmake python git

2. Clone the Repository

Get the latest llama.cpp code from GitHub:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp

3. Build the Project

llama.cpp uses CMake for compilation. Run:

mkdir build
cd build
cmake ..
cmake --build . --config Release

After this step, you’ll have compiled binaries in your build folder, including the quantize tool.

4. Verify Your Build

Check that quantize is available:

./quantize --help

You should see usage instructions for the quantize tool.

5. Use the Python Scripts

The convert_hf_to_gguf.py script is located in the llama.cpp root directory. You can run it like this:

cd ..
python3 convert_hf_to_gguf.py /path/to/huggingface/model \
--outfile /path/to/output/model.gguf

Once converted, you can quantize the model:

./build/quantize model.gguf model.Q4_K_M.gguf Q4_K_M

Troubleshooting (fast)

  • Model doesn’t load in Ollama: Check FROM path in Modelfile, GGUF file name, and that Ollama version supports GGUF.
  • Quantization ruined quality: Run the f16 GGUF to compare. If f16 is good, try different quant modes (q5 or less aggressive).
  • Weird tokens / formatting: Adjust TEMPLATE to match the prompt markers your model expects.
  • Model asks irrelevant questions: Tweak SYSTEM prompt to be more directive.
  • High memory usage: Use more aggressive quantization or move to a machine with larger RAM.

How to measure success

  • Human rating: 5-point rubric (relevance, vividness, correctness, helpfulness, clarity).
  • Operational metrics: inference latency, CPU/GPU utilization, cost per inference.
  • Business metrics: support deflection rate, drafts produced per week, conversion lift.
  • A/B tests: put fine-tuned model vs frontier model behind the same UI, measure user engagement and task completion.

Security & licensing

  • Check model license on Hugging Face (some base models restrict commercial use).
  • Don’t expose sensitive data in logs; encrypt secrets and store models on secure disks.

Open-Source vs Frontier: the decision matrix

Short version: Use open-source when you must control data, reduce operating cost, or specialize heavily. Use frontier models when you need best-in-class general reasoning, multimodal glue, and zero ops.

Rule of thumb: If you plan large scale or need the best general reasoning → frontier. If you need privacy, cost control, or niche expertise → open-source.

Follow me for the next article, or suggest topics you'd like me to cover next.

GitHub Repository: https://github.com/Taha-azizi/finetune_imageprompt

All images were generated by the author using AI tools.

Now you’re ready to run your optimized model locally — lightweight, fast, and ready for production-like testing.
