
How to use LLaMA 3 with C# and OllamaSharp

OllamaSharp is a .NET library that allows us to use LLMs such as LLaMA 3 easily from a C# application.

LLaMA 3 is the latest version of the language model developed by Meta, designed to understand and generate text intelligently.

In theory, LLaMA 3 rivals or even surpasses systems such as ChatGPT, Copilot, or Gemini in benchmarks. I won't say categorically that it surpasses them (at least not in Spanish), but the truth is that it gives very good results. It's pretty close.

On the other hand, Ollama is an open-source tool that simplifies running large language models (LLMs) locally. Among these models is LLaMA 3.

Finally, we can use Ollama from a C# application very easily with OllamaSharp. OllamaSharp is a C# binding for the Ollama API, designed to facilitate interaction with Ollama using .NET languages.

With Ollama + LLaMA 3 and OllamaSharp, we can use LLaMA 3 in our applications with just a few lines of code, with support for different features such as completions or streaming responses. In short, it's wonderful, let's see how 👇

How to install Ollama and LLaMA 3

First, we need to install Ollama on our computer. We simply go to the project page https://github.com/ollama/ollama, and download the appropriate installer for our operating system.

Once installed, from a command console we run

ollama run llama3

This will download the LLaMA 3 8B model to our computer. There are two versions, with 8B and 70B parameters respectively. The first takes up around 5 GB and the second around 40 GB.
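Since these models take up several gigabytes, it's useful to keep track of what you have downloaded. The Ollama CLI includes commands for this:

```shell
# Show the locally downloaded models and their size on disk
ollama list

# Remove a model you no longer need, to free up disk space
ollama rm llama3
```
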

If your graphics card can load the model into memory, it will run very fast, around 500 words per second. If it has to run on the CPU, it will be very slow, around 2-5 words per second.

An RTX 3060 GPU can run the 5 GB one without any problem. The 40 GB one, … well, you will need a truck to run it. But hey, if you have a machine that powerful, you can install the 70B one:

ollama run llama3:70b

Now you can write your prompt directly in the command console and verify that everything works.

(Screenshot: LLaMA 3 responding in the Ollama console)
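If you prefer, you can also test the model from the terminal without any client library, since Ollama exposes a REST API on port 11434 (this is the endpoint documented in the Ollama repository):

```shell
# Send a single prompt to the local Ollama server and get the full
# response as one JSON object (set "stream": true for token-by-token output)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

This is the same API that OllamaSharp will call for us later, just wrapped in a .NET-friendly interface.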

As a bonus, if you want, you can install a web interface to interact more easily with your LLaMA 3 chatbot. There are several available; I like this one: https://github.com/ivanfioravanti/chatbot-ollama, based on Mckay Wrigley's chatbot-ui project.

(Screenshot: chatbot-ollama web interface talking to LLaMA 3)

To install it, create a folder anywhere on your computer, and inside it run the following:

git clone https://github.com/ivanfioravanti/chatbot-ollama.git
cd chatbot-ollama
npm ci
npm run dev

And now you have a web chatbot connected to your LLaMA 3 model running in Ollama.

How to use OllamaSharp

Now we can create a C# application that connects to LLaMA 3. Ollama will take care of managing the models and data needed to run the queries, while OllamaSharp provides the integration with your application.

We can easily add the library to a .NET project through the corresponding NuGet package.

Install-Package OllamaSharp
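If you use the .NET CLI instead of the Package Manager Console, the equivalent command is:

```shell
dotnet add package OllamaSharp
```
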

Here are some examples of how to use OllamaSharp, taken from the library's documentation:

using OllamaSharp;

// Connect to the local Ollama server (default port 11434)
var uri = new Uri("http://localhost:11434");
var ollama = new OllamaApiClient(uri);

// Select the model to query
ollama.SelectedModel = "llama3";
var prompt = "Hello, I'm Luis, how are you???";

// Stream the completion, printing each chunk as it arrives
ConversationContext? context = null;
context = await ollama.StreamCompletion(prompt, context, stream => Console.Write(stream.Response));

Console.ReadLine();
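Beyond one-shot completions, OllamaSharp also supports interactive chats that keep the conversation context between messages. A minimal sketch, based on the chat API of early OllamaSharp releases (the exact method names may differ between versions, so check the repository's README for yours):

```csharp
using OllamaSharp;

var uri = new Uri("http://localhost:11434");
var ollama = new OllamaApiClient(uri);
ollama.SelectedModel = "llama3";

// Create a chat that prints each streamed token as it arrives.
// Note: this follows the chat API of early OllamaSharp versions;
// newer releases expose a Chat class instead.
var chat = ollama.Chat(stream => Console.Write(stream.Message?.Content ?? ""));

// Each Send call keeps the previous messages as context,
// so the model remembers what was said before
await chat.Send("Hello, I'm Luis. Remember my name, please.");
await chat.Send("What's my name?");
```
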

OllamaSharp is open source, and all the code and documentation are available in the project repository at https://github.com/awaescher/OllamaSharp