On LLM

Published: at 12:00 AM

I’ve been using ChatGPT at work for the last few months, and it’s super handy for asking simple questions, like how to write a more complex type in TypeScript, or questions about Ruby. But I wanted to try something a little more local by installing an LLM using Ollama, and well, the results were a bit disappointing.

I got llama 3.1, allegedly the latest and greatest LLM from Meta (ok, it was the smallest one, but still), and I asked it to write a function that generates aircraft tail numbers by country. Why? Because each country has its own rules and I wanted to skip the research.

Turns out they were all wrong: the function only generated numbers (most tail numbers use letters), and a quick online image search to check the real formats showed me that pretty much 80-90% of them were incorrect, like suggesting that Canadian registrations use CF- when they actually start with C-, or that EU countries start with a 2-digit country code when Germany uses D-, etc. I had to redo them all from scratch.
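For reference, here’s a toy sketch of what a correct starting point might look like, using the well-known prefixes mentioned above (Canada is C-, Germany is D-, etc.). The `tailPrefix` table and `randomTail` function are just illustrative names, and the letters-only suffix is a simplification (the US, for instance, uses N followed by digits and letters):

```typescript
// Registration prefixes by country -- the widely known ones,
// not an exhaustive or authoritative list.
const tailPrefix: Record<string, string> = {
  Canada: "C-",  // e.g. C-FABC (not CF-, as llama suggested)
  Germany: "D-", // e.g. D-ABCD
  UK: "G-",      // e.g. G-ABCD
  France: "F-",  // e.g. F-ABCD
};

// Toy generator: prefix plus four random letters. Already closer
// to reality than the digits-only output I got from the model.
function randomTail(country: string): string {
  const prefix = tailPrefix[country];
  if (!prefix) throw new Error(`No prefix known for ${country}`);
  const letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  let suffix = "";
  for (let i = 0; i < 4; i++) {
    suffix += letters[Math.floor(Math.random() * letters.length)];
  }
  return prefix + suffix;
}
```

Even this tiny version encodes the one thing the model got wrong: the prefix rules really are per-country and mostly letter-based.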

So yeah, so far ChatGPT seems better. The big llama (229 GB!) is probably better too, but for now I’ll keep the local model for simple questions about coding.