OCIO y TECnología
Ollama, run your own ChatGPT

Emilio González Montaña 2024/01/06 0

With Ollama, you can locally run any available open-source LLM (Large Language Model), and it’s compatible with Windows, macOS, and Linux, including as a Docker container.

Table of Contents

  • Download & install
  • Run Ollama
    • Requirements
    • Check version & help
    • ollama help
    • Running llama2 model
    • Ask anything!
  • Give me more! (models)
  • Conclusions

Download & install

First, go to the Ollama downloads page and select your OS:

Once downloaded, follow the installation instructions; on macOS just double-click and drag the application to the Applications folder.
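On Linux, the project also documents a one-line install script as an alternative to the downloads page; a sketch of that documented approach:

```shell
# Official Ollama convenience script for Linux.
# As always, review scripts before piping them to sh.
curl -fsSL https://ollama.com/install.sh | sh
```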

Run Ollama

Running Ollama requires pulling (downloading) a model first.

This is quite simple with Ollama; indeed, it achieves for LLMs (models) what Docker did for containers.

Requirements

Any computer with at least 4 cores and 16 GB of RAM will do the trick, but without a GPU, more RAM, and more CPU it will be slow, damn slow indeed. Still, you can at least try ChatGPT-like power on your own setup! (insert evil smile here).

Check version & help

We can check the installed ollama version to verify everything was set up correctly:

ollama --version

We can also get help on the available commands with:

ollama help

Running llama2 model

The first time you run a model it is pulled automatically.

We will try the most well-known model (at the time of this article), llama2:

ollama run llama2

Ask anything!

Once you are running a model, you can ask it anything:
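Besides the interactive prompt, the same model can be queried over Ollama's local REST API; a sketch assuming the server is running on its default port 11434 (the prompt text is just an example):

```shell
# One-shot, non-streaming prompt against the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```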

Of course it keeps state, so follow-up questions can build on previous questions and answers, for example:
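That statefulness comes from resending the conversation so far on each turn. A minimal Python sketch of how a request body for Ollama's `/api/chat` endpoint could be assembled (the helper names and example questions are hypothetical):

```python
# Build a growing message history for Ollama's /api/chat endpoint.
# Resending the full history is what lets the model "remember".

def append_turn(history, role, content):
    """Append one chat turn; roles are 'user' or 'assistant'."""
    history.append({"role": role, "content": content})
    return history

def chat_payload(model, history):
    """Assemble the JSON body expected by POST /api/chat."""
    return {"model": model, "messages": history, "stream": False}

history = []
append_turn(history, "user", "Who wrote Don Quixote?")
append_turn(history, "assistant", "Miguel de Cervantes.")
append_turn(history, "user", "When was he born?")  # follow-up relies on context

payload = chat_payload("llama2", history)
```

An actual request would POST this payload to http://localhost:11434/api/chat, for example with `requests.post`.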

Give me more! (models)

So you want more stuff? No problem: there are hundreds (maybe thousands by now) of models (LLMs) available to install.

On the Ollama models library page you can explore all available models:

To run any of them, use the same command as before with the new model name.
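For example (mistral is used here purely as an illustration; any name from the library page works the same way):

```shell
# Pulling explicitly first is optional; "run" pulls on demand
ollama pull mistral
ollama run mistral
```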

Conclusions

Well, is it ChatGPT? Quick answer: no, unless you have a very good and expensive PC (16 cores, 64 GB of RAM, and an Nvidia 3000-series or better GPU with a lot of dedicated RAM).

Is this article written by Ollama? No (trust me).
