How RamaLama helps make AI model testing safer


Here, for context, is what I said:

I want to show you how easy it is to test deepseek-r1. It’s a single command. I know nothing about DeepSeek or how to set it up, and I don’t want to. But I want to get my hands on it so that I can understand it better. RamaLama can help!

Just type:


ramalama run ollama://deepseek-r1:7b

When the model finishes downloading, enter the same prompt you gave granite or merlin and compare how the models perform by looking at their results (see the sketch below). It’s interesting how DeepSeek tells itself what to include in the story before it writes the story. It’s also interesting how it confidently says things that are wrong 🙂
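For a quick side-by-side test, the pattern is the same for every model: point ramalama run at a registry reference and give each model the same prompt. One reason this kind of casual testing is relatively safe is that RamaLama runs the model inside a container by default, keeping the experiment isolated from the host. The Granite tag below is illustrative; check the Ollama registry for the exact model name:

# Tag is an assumption for illustration; verify the name in the registry
ramalama run ollama://granite3.1-dense:8b
ramalama run ollama://deepseek-r1:7b

Feed both sessions the same prompt and compare the transcripts directly.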

What DeepSeek thinks

In my email, I included the results of a query I ran against DeepSeek: I asked it to write a story about a certain open-source-forward software company.

DeepSeek returned an interesting narrative, not all of which was accurate, but what was really cool was the way that DeepSeek “thought” about its own “thinking” — in an eerily human and transparent way. Before generating the story, DeepSeek—which, like OpenAI o1, is a reasoning model—spent a few moments muddling through how it would put the story together. And it showed its thinking, 760-plus words’ worth. For example, it reasoned that the story should have a beginning, a middle, and an end. It should be technical, but not too technical. It should talk about products and how they are being used by businesses. It should have a positive conclusion, and so on. 
