
Running Llama and CodeLlama on a Laptop

October 11 @ 6:00 pm - 7:00 pm

Llama.cpp for CPU-based inference

Hello everyone!

We’re back at HudsonAlpha this week to finish what we started with Llama and CodeLlama. Last time we covered the switch from the GGML format to the GGUF format – check the GitHub repo for links to all the relevant information (link below). We also talked a good bit about the quantization methods that allow the models to be reduced in size while introducing a minimal amount of error.
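As a quick refresher on the basic idea, here's a minimal sketch of block-wise quantization in Python. This is only an illustration of the concept – one scale per small block of weights plus a low-bit integer per weight – not the actual Q4_K-style schemes llama.cpp uses; the block values are made up.

    # Simplified block-wise 4-bit quantization sketch (illustrative only).
    def quantize_block(values, bits=4):
        # One scale per block; each weight becomes a signed integer in [-8, 7].
        qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit
        scale = max(abs(v) for v in values) / qmax or 1.0
        quants = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
        return scale, quants

    def dequantize_block(scale, quants):
        # Reconstruct approximate floats; the difference is the quantization error.
        return [q * scale for q in quants]

    block = [0.12, -0.53, 0.98, -0.07, 0.44, -0.91, 0.30, 0.05]
    scale, quants = quantize_block(block)
    approx = dequantize_block(scale, quants)
    print(max(abs(a - b) for a, b in zip(block, approx)))  # small per-weight error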

Now we’ll put that into practice by running Llama 2 and CodeLlama through their paces on my laptop. I’ll be running this inside a WSL2 terminal on Windows 11 with an i7 processor and a 4 GB GPU.
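If you want to follow along on your own machine, here's a minimal sketch using the llama-cpp-python bindings. The model filename, context size, and number of offloaded layers are just placeholder assumptions – point it at whatever GGUF file you have and tune n_gpu_layers to fit your GPU's memory.

    from llama_cpp import Llama

    # Load a quantized GGUF model. Path and settings below are assumptions;
    # a 4 GB card can usually take a partial offload of a 7B Q4 model.
    llm = Llama(
        model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # assumed filename
        n_ctx=2048,
        n_gpu_layers=20,
    )

    # Run a single prompt and print the completion.
    output = llm(
        "Q: Write a Python function that reverses a string. A:",
        max_tokens=256,
        stop=["Q:"],
    )
    print(output["choices"][0]["text"])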

Please do some googling and come prepared with your best Llama 2 and CodeLlama prompts to try!

We’ll be running this meetup as a hybrid event, both in person and on Zoom. The room we’ll be in at HudsonAlpha is set up for hybrid meetings with Zoom, which worked seamlessly last week.

Links:

Details:

Hope to see you there!
-J.

Details

Date: October 11
Time: 6:00 pm - 7:00 pm

Venue

HudsonAlpha
601 Genome Way Northwest
Huntsville, AL 35806