Setting up a Local LLM

I wanted to experiment a bit with an LLM and with training it, so I decided to try a few things. I looked at a few tutorials (see the references below) and eventually got this working. This post distills what I got to work.

Getting Started

I like containers, so rather than install something directly on my machine, I decided to use the Docker image for Ollama. I ran this to get started:

docker pull ollama/ollama
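
The exec commands below assume a container named ollama is already running. If it is not, the standard CPU-only invocation from the Ollama Docker instructions starts one, with a named volume so downloaded models survive container restarts and the default API port published:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama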

From there, I pulled a few models into the container with commands like this one:

docker exec -it ollama ollama pull mistral
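
To confirm what actually made it into the container, ollama's list subcommand shows the locally available models:

docker exec -it ollama ollama list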

I then ran this to start the model and interact with it:

docker exec -it ollama ollama run mistral
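
Once the prompt appears, anything typed is sent straight to the model. The ollama REPL also understands a handful of slash commands; these come from the standard ollama CLI rather than anything specific to this setup, and a few are worth knowing:

/? (list the available slash commands)
/show info (print details about the loaded model)
/bye (exit the session)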

Here are the first few things I typed in to test the model:

(Screenshot of the initial prompts and the model's responses.)

From here, I exited the interactive session and then ran this to start the model in detached mode:

docker exec -d ollama ollama run mistral
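
With the model running detached and port 11434 published (as in the docker run command shown earlier), Ollama's HTTP API is another way to talk to it. Here is a minimal sketch against the documented /api/generate endpoint; the prompt is just an example, and stream is set to false so the response comes back as a single JSON object:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'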

From there, I ran more experiments, but those are for another article.

References

Here are some places and tutorials I looked at:
