Running Llama and CodeLlama on a Laptop
October 11 @ 6:00 pm - 7:00 pm
We’re back at HudsonAlpha this week to finish what we started with Llama and CodeLlama. Last time we covered the switch from the GGML format to the GGUF format – check the GitHub repo for links to all the relevant information (link below). We also talked a good bit about the quantization methods that reduce model size while introducing only a minimal amount of error.
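To give a feel for why quantization matters on laptop-class hardware, here is a rough back-of-the-envelope sketch of how bit width drives model file size. The numbers are illustrative only (they ignore quantization scales, metadata, and other overhead) and are not official llama.cpp figures – see the model size comparison link below for real measurements.

```python
def model_size_gb(n_params_billion, bits_per_weight):
    """Approximate on-disk size in GB for a model stored at the given
    bit width (ignores overhead like quantization scales and metadata)."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# Llama 2 7B: fp16 vs. a 4-bit quantization
fp16_gb = model_size_gb(7, 16)  # ~14 GB
q4_gb = model_size_gb(7, 4)     # ~3.5 GB
print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {q4_gb:.1f} GB")
```

This is why a 7B model that would never fit in 4 GB of VRAM at fp16 becomes practical (with partial GPU offload) once quantized.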
Now we’ll put that into practice by running Llama 2 and CodeLlama through their paces on my laptop. I’ll be running everything inside a WSL2 terminal on Windows 11, with an Intel i7 processor and a GPU with 4 GB of VRAM.
Please do some googling and come prepared with your best Llama 2 and CodeLlama prompts to try!
We’ll be running this meetup as a hybrid event, both in person and on Zoom. The room we’ll be in at HudsonAlpha is set up for hybrid meetings with Zoom, which worked seamlessly last week.
- Discussion from last week – https://github.com/HSV-AI/presentations/blob/master/2023/231004-CodeLlama.md
- Llama.cpp – https://github.com/ggerganov/llama.cpp
- Model Size comparison – https://github.com/ggerganov/llama.cpp#memorydisk-requirements
- Code Llama Blog Post – https://huggingface.co/blog/codellama
- Date – 10/11/2023
- Time – 6-7pm
- Location – HudsonAlpha
- Address – 601 Genome Way Northwest, Huntsville, AL 35806
- Zoom – https://us02web.zoom.us/j/86171518243?pwd=WEMwaExGZGVKYVVOZnRpUGxDb1JKZz09
Hope to see you there!