BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Huntsville AI - ECPv6.8.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Huntsville AI
X-ORIGINAL-URL:https://hsv.ai
X-WR-CALDESC:Events for Huntsville AI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20241106T180000
DTEND;TZID=America/Chicago:20241106T190000
DTSTAMP:20260419T021704Z
CREATED:20241030T030545Z
LAST-MODIFIED:20241102T174915Z
UID:1766-1730916000-1730919600@hsv.ai
SUMMARY:Structured Outputs with Finite State Machines & Open Source Speech to Speech
DESCRIPTION:We have two guest speakers this week\, Josh Phillips & Charlie Rogers! Josh has covered a few topics for us in the past\, but this will be Charlie’s first time presenting\, so please come and engage.\nTopic 1 (Josh Phillips): Structured Outputs with Finite State Machines\nDiscover how integrating finite state machines with guided decoding can significantly improve the stability and reliability of Large Language Model inference. In this session\, you will dig into the details of structured generation\, learning how it not only ensures strict adherence to a desired format but also accelerates processing by up to five times. You’ll see firsthand demonstrations of several pipelines moving from unstructured to structured outputs\, illustrating how straightforward the technique is to implement. Join us to pick up advanced techniques that improve both the performance and the consistency of your language models.\nTopic 2 (Charlie Rogers): Let’s Talk It Out: Open Source Speech to Speech (S2S)\nDive into the world of open-source speech-to-speech systems and discover how to build efficient pipelines from scratch. Learn about the modular approach\, explore user-friendly tools\, and hear about Charlie’s experience automating client-server architectures with RunPod. Ideal for developers and researchers looking to advance their speech processing projects.\n2025 AI Symposium:\nWe should also have representatives from the Space & Rocket Center AI Symposium stop by to introduce next year’s symposium. Last year was a lot of fun\, and this year looks to be even bigger. I’ll drop a link to last year’s sessions below.\n\nLinks & Other Events:\n\n2024 AI Symposium Recorded Sessions – https://www.youtube.com/playlist?list=PLvvHQqQynqmtkQuLsvfFtmy0OohBcA5H4\n2025 AI Symposium – https://www.rocketcenter.com/institute\nHuntsville AI and Machine Learning Technology Exchange and Expo – https://meetingsevents.reg.ext.hpe.com/event/0144e3b2-0951-4eea-b9a9-21be49da3441/summary\nAI Innovators of Huntsville (Andrey’s group) at GigaParts – https://events.gigaparts.com/events/gigapartshuntsville/1422131\n\nDetails:\n\nDate – 11/06/2024\nTime – 6-7:30pm\nLocation – HudsonAlpha\nAddress – 601 Genome Way Northwest\, Huntsville\, AL 35806\nZoom – https://us02web.zoom.us/j/85119272393?pwd=A90PaqdUl1SEz6hyB8PFDbhTN3dKTB.1\n\nWe had a fantastic turnout for the social\, and it was great to meet a lot of new people from different backgrounds. Thanks to Phillip Lee with ISSA for boosting our exposure with the local cyber community!\nAs always\, I really appreciate the support and replies to these emails. You can also help by following\, sharing\, liking\, and dropping comments on my posts on LinkedIn and Facebook – especially the ones directly for the Huntsville AI page on LinkedIn – https://www.linkedin.com/company/huntsville-ai\nHope to see you soon!\n-J.
URL:https://hsv.ai/event/structured-output/
LOCATION:HudsonAlpha\, 601 Genome Way Northwest\, Huntsville\, AL\, 35806
ATTACH;FMTTYPE=image/png:https://hsv.ai/wp-content/uploads/2024/10/2topics-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20241120T180000
DTEND;TZID=America/Chicago:20241120T190000
DTSTAMP:20260419T021704Z
CREATED:20241030T030833Z
LAST-MODIFIED:20241120T050446Z
UID:1769-1732125600-1732129200@hsv.ai
SUMMARY:Overview of Text to Speech Approaches
DESCRIPTION:Thanks to Josh Phillips for hosting last week while I was out! I should have the recording up soon at https://hsv.ai/videos/\nThis week we will take a look at Text to Speech models in three different categories:\n\nProducts that create audio from text in an offline mode\nAPIs that can be integrated into a product\nOpen source models that you can host yourself\n\nEach category presents different challenges that we’ll cover\, such as latency\, realism\, and hallucination. Here’s the list of products and models so far\, so if you don’t see your favorite in the list\, let me know and we’ll check it out as well:\n\nParler\nCoqui\nBark\nOpenAI (6 models)\nBASE TTS (Amazon)\nMetaVoice\nMeloTTS\nElevenLabs\nFacebook MMS\n\nAlso – a few of us went to the Huntsville AI and Machine Learning Technology Exchange and Expo last week\, so we might do an overview of those topics if time permits.\n\nLinks & Other Events:\n\n2024 AI Symposium Recorded Sessions – https://www.youtube.com/playlist?list=PLvvHQqQynqmtkQuLsvfFtmy0OohBcA5H4\n2025 AI Symposium – https://www.rocketcenter.com/institute\nHugging Face Text to Speech – https://huggingface.co/tasks/text-to-speech\n\nDetails:\n\nDate – 11/20/2024\nTime – 6-7:30pm\nZoom – https://us02web.zoom.us/j/89971705398?pwd=CndWhnWX6sbtgQLaAbn8CctPjzcxjV.1\n\nAs always\, I really appreciate the support and replies to these emails. You can also help by following\, sharing\, liking\, and dropping comments on my posts on LinkedIn and Facebook – especially the ones directly for the Huntsville AI page on LinkedIn – https://www.linkedin.com/company/huntsville-ai
URL:https://hsv.ai/event/text-to-speech/
ATTACH;FMTTYPE=image/png:https://hsv.ai/wp-content/uploads/2024/10/Text-to-Speech.png
END:VEVENT
END:VCALENDAR