BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Huntsville AI - ECPv6.8.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://hsv.ai
X-WR-CALDESC:Events for Huntsville AI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260211T180000
DTEND;TZID=America/Chicago:20260211T193000
DTSTAMP:20260412T105713Z
CREATED:20260206T173602Z
LAST-MODIFIED:20260206T181634Z
UID:2144-1770832800-1770838200@hsv.ai
SUMMARY:Virtual Paper Review – SegmentAnything3: Segment Video with Text
DESCRIPTION:Join us virtually Wednesday Feb 11th at 6 pm CST to continue our monthly Paper Review series! We will be dissecting SAM 3 (Segment Anything Model 3)\, a unified model that bridges the gap between geometric segmentation (clicks and boxes) and semantic understanding (text and concepts). While previous versions excelled at “segmenting that thing\,” SAM 3 introduces Promptable Concept Segmentation (PCS)\, the ability to find\, segment\, and track all instances of a specific concept (e.g.\, “striped cat” or “red apple”) across both images and videos. \nTopics we will cover:\n\nPVS vs. PCS: The evolution from Promptable Visual Segmentation (points/masks) to Promptable Concept Segmentation (noun phrases/exemplars).\nLocalization-Recognition Conflict: Why forcing a model to know “where” something is often conflicts with knowing “what” it is\, and how open-vocabulary detection has historically struggled with this.\nData Engine Basics: The role of human-in-the-loop vs. model-in-the-loop pipelines for generating massive-scale segmentation datasets.\nArchitecture: How SAM 3 fuses an image-level DETR-based detector with a memory-based video tracker using a shared Perception Encoder (PE) backbone.\nPresence Head: A look at the novel “presence token” that decouples recognition (is the concept in the image?) from localization (where are the pixels?).\nSA-Co Dataset: A segmentation corpus spanning 4M unique concepts.\nVideo Disambiguation: Strategies for handling temporal ambiguity\, including “masklet” suppression and periodic re-prompting during tracking failures.\n\nLinks:\n\nPaper:\nhttps://ai.meta.com/research/publications/segment-anything-model-3-sam-3/\nCode:\nhttps://github.com/facebookresearch/sam3\nDemo:\nhttps://segment-anything.com\n\nDetails:\n\nDate – 02/11/2026\nTime – 6:00 – 7:30 pm\nLocation – VIRTUAL\nGoogle Meet – https://meet.google.com/drf-dydt-mgn
URL:https://hsv.ai/event/virtual-paper-review-segementanything3-segment-video-with-text/
ATTACH;FMTTYPE=image/jpeg:https://hsv.ai/wp-content/uploads/2026/02/SAM3-Paper-Review.jpeg
END:VEVENT
END:VCALENDAR