I wanted to experiment a bit with an LLM and with training it, so I decided to try a few things. I looked at a few tutorials (see the references below) and eventually got a working setup. This post distills what actually worked for me.
Getting Started
I like containers, so rather than installing anything directly on my machine, I decided to use the Docker image for Ollama. I ran this to get started:
docker pull ollama/ollama
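The pull only downloads the image; the exec commands below assume a container named ollama is already running. The usual invocation from the Ollama Docker instructions looks roughly like this (the named volume keeps downloaded models across restarts, and publishing port 11434 exposes the HTTP API used later on):
# start the Ollama server container; the name must match the exec commands below
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama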
From there, I pulled a few models into the container; for example, to pull Mistral:
docker exec -it ollama ollama pull mistral
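Other models from the Ollama library follow the same pattern; the model names here are just illustrative picks, not necessarily what I used:
# illustrative examples of other models available in the Ollama library
docker exec -it ollama ollama pull llama3.1
docker exec -it ollama ollama pull gemma2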
I then ran this to start the model and interact with it.
docker exec -it ollama ollama run mistral
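This drops you into an interactive chat prompt (>>>) where you can type messages directly; /bye (or Ctrl+d) ends the session.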
Here are the first few things I typed to test the model.
From here, I exited and then ran this to start the model in detached mode.
docker exec -d ollama ollama run mistral
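With the container detached, one quick way to check that the model still responds is through Ollama's HTTP API on port 11434. This is just a sketch (it assumes the port was published when the container was started, as in the run command above, and the prompt text is made up):
# hypothetical smoke test against Ollama's generate endpoint
curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Say hello in one sentence.", "stream": false}'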
From there, I ran more experiments, but that's a topic for another article.
References
Here are the tutorials and other resources I looked at:
- https://mrash.co/python-ollama-file/
- https://medium.com/cyberark-engineering/how-to-run-llms-locally-with-ollama-cb00fa55d5de
- https://medium.com/@nsidana123/running-models-with-ollama-step-by-step-b3bdbfd91e8e
- https://github.com/dockersamples/codellama-python
- https://collabnix.com/how-to-run-open-source-llms-locally-with-ollama-and-docker-llama3-1-phi3-mistral-gemma2/
- https://www.timescale.com/blog/build-a-fully-local-rag-app-with-postgresql-mistral-and-ollama