2025 Kickoff

Transcription provided by Huntsville AI Transcribe

All right, there’s that, and I’ve got, I actually put a sign up so that if the door gets closed somehow people can actually call me and get in, which is nice, because I actually get here just before five when they lock that door.

Oh, I see.

And so, if you ever want to hang out for a while before the meeting, come early. Showing up early is okay; you can hack on some stuff while we're sitting in here. I usually want to polish up slides or whatever it is we're going to talk about, and I'm glad I did, because I was able to look and see that there were actually new SBIR topics that dropped today.

So, following up from what we did last year: SBIR topics used to drop three or four times a year, you know, March, August, whatever.

They changed that, so they’re dropping every month on the first Wednesday of the month.

Oh great, I learned that last time. It helps spread things out over the year, so if you have a budget for writing proposals, you don't have to blow it all at one time; you can spread it across the year. So, there's that, but it also means you have to check every single month to see what came out. So, we'll walk through some of those.

Didn’t go away.

We’ll cover that. So, welcome to 2025. Okay, yeah, appreciate it, David. We are, I don’t know what you would call the theme for this year.

Last year was the year of the RAG, I guess.

We did RAG stuff throughout the year. We also had some big splashes on video. So, Sora dropped, and then you had some others that kind of followed that up, doing some interesting stuff along that route. Let's see, we had some other stuff last year also.

I don't want to spend too much time covering it, but some of the normal events that we do: the Hudson Alpha Tech Challenge, we participated in that. We bailed out on Space Apps. It was the first time we didn't do the Space Apps Challenge in probably eight years. Urban Engine used to kind of put it together and host it. They couldn't last year, so I think the Monkey's Wrench tried to do it, and I think they did something, but it wasn't nearly at the level it usually was. Let's see, there was the AI Symposium at the Space and Rocket Center last year, as well as the other meetup at the Space and Rocket Center, I can't remember what the name of it was. I think it was the Huntsville AI Symposium that HPC put on, mostly sponsored over at, which hotel was that? The one on Bridge Street. The what?

The one on Bridge Street.

Yeah, it was one over Bridge Street, that one. So, there was that one. And this, we did that.

Other than that, I think we did 25 sessions last year. So, just trying to keep up that same rhythm. We're looking at doing some additional stuff, which I'll talk about in a minute. So, first up, coming up in January, there's the actual Space and Rocket Center AI Symposium.

That’s January 27th, I think through 28th, 29th.

It’s in the middle of a week. I’m actually doing a talk on that Thursday, covering kind of how to learn AI and how to keep up with AI. I’ve got slides in my head, but they haven’t made it on paper yet. It’s something I wouldn’t mind chatting about a little bit. I’ve kind of got this, it’s kind of a framework I’ve used for a while. Thinking of it as far as, I’ve got, let me switch this over to camera control, see if I can flip this over. Oh, actually, I just need to stop sharing. And I’ll cover this real quick and see what you think. Oh, because we got markers. And this is something I use a lot.

It's just kind of a context. So, I've got maturity. Actually, that's my y-axis: maturity. So, we'll flip that over here. And then this one, I don't know if there's a better word for it, but I'm calling it churn. This is: how often does it change?

What are the chances that you learn something and you throw it away a month and go learn something totally different? So, there are places that have high maturity, but still high churn.

That’s okay.

I mean, at least you've got something that's been out there for a while, but it's changing a lot. So, the part that you want to stay away from is low maturity, high churn. That's not really where you want to be. If you're up here and you have the bandwidth to keep up with all the papers dropping, all the news, all the updates, that kind of stuff, this might be an okay place for you.

If you are just learning something brand new, you probably want to be over here.

If this is your first dive into learning AI, you may want to pick some things that aren't changing every month, because by the time you learn it, you're off trying to learn something new, and you feel like you never get it under your belt. So, that's one concept. The other thing I've been dropping into a lot: I started off looking at it as, well, I'm still looking for a better word than "technology."
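If it helps to make the quadrant idea concrete, here's a toy sketch. The labels and advice strings are mine, not from the talk:

```python
def learning_advice(maturity: str, churn: str) -> str:
    """Map the maturity/churn quadrants to a rough recommendation.

    maturity: "high" (been around a while) or "low" (brand new)
    churn:    "high" (changes monthly) or "low" (stable)
    """
    quadrants = {
        ("high", "low"):  "good starting point if you're new",
        ("high", "high"): "fine if you have bandwidth to track every update",
        ("low", "low"):   "niche, but learnable at your own pace",
        ("low", "high"):  "stay away unless you can keep up with the papers",
    }
    return quadrants[(maturity, churn)]
```

The point is just that the two axes are independent: high maturity doesn't save you from high churn.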

If I were to say things like image recognition or reinforcement learning or CNNs or, you know, LLMs, what would you call that collection of things? It's not technologies. It's a combination of domains and methods, in my head. Okay, domains and methods, I've got that.

And you can apply this to each one of those things.

So, from image recognition, we’re pretty sure that that’s been around for a long time.

There's a lot of things out there that do image recognition. I mean, heck, getting on the Arsenal, you've got an image recognition lane at the gate, all that kind of stuff. And there's not a ton of churn at the moment; there was some churn and it leveled out. It depends. I would say you're missing an entire layer here, which is all of this, but layered on a use case.

So, if you put it that way… My next slide is use cases.

And that’s, well, I’ll go ahead and finish. I got four things. One of them was hardware.

And then the last one was data. And I’m still not sure whether to put that in there in this piece or not.

There’s some data sets that people have used forever, you know, ImageNet and others.

I can’t think of the one with the characters on it.

But use cases, definitely.

What’s fun is there are use cases where we have flipped out the domain or the method that we use to do the same kind of a thing, especially with like, I mean, look at natural language work.

I mean, back when I first played around with it, we were using Bayesian methods, doing term frequency, document frequency, all that kind of stuff.

And from there, moved into, you know, word vectors and things like that.

From there, moved into semantic search and semantic learning, things like that. And then LLMs drop and we're like, oh crap, we don't even do the things we did before. So that's just thinking through: if you've got somebody coming in trying to figure out, hey, I'm new to AI and I'm a technical kind of person, or I like to hack around with things, where do I start?
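For anyone who never saw the term-frequency era mentioned above, here's a minimal TF-IDF scored by hand, stdlib only. This is a sketch of the general technique, not any particular library's implementation:

```python
import math
from collections import Counter

def tf_idf(docs: list[list[str]]) -> list[dict[str, float]]:
    """Plain term-frequency / inverse-document-frequency.

    docs: each document is a list of tokens.
    Returns one {term: score} dict per document.
    """
    n = len(docs)
    # document frequency: how many docs contain each term
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores
```

This is roughly what we leaned on before word vectors: terms that show up in every document score zero, rare terms score high.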

You know, and I would actually point people at use cases first.

Sorry, what problem were you trying to solve? Or what problems are you interested in?

Because the other thing is, you can be extremely smart on domains and methods, and you can think your thing is working great, but if you're not knowledgeable enough on the use case you're applying it to, you may think it's great and then put it out in the wild and just get nailed on Twitter or Reddit or whatever. Kind of like the one that did the image thing, where they would take a pixelated image and make it high-res, and they figured out they had only trained on people that looked like me. Oh yeah, they would take any pixelated image and turn it into, oh my god, you know. It was really, really funny, and it was at least good that it wasn't in something that had a high impact, so it wasn't, you know, they're not using that to get on Redstone Arsenal. All right, that's just the general thought. If you've got any thoughts on that, or ways to talk about it... because this AI symposium that's coming up is looking at probably, I think last year we had what, five, six hundred people there?

Something like that? I mean, it’s a pretty large gathering of folks.

Last year was the first time they had done it, so it was kind of interesting to see who they brought in to talk and what everybody was looking for, things like that. The best session I got out of it last year: they actually had four different people based off of the Arsenal.

They had somebody from MDA, and I can't remember the four that were there, just talking about AI in their arena, which was really interesting, because some of the problems they run into are not the same problems that we run into from the commercial perspective. Lots of fun with air gaps, and fun trying to get something approved that most of us would go, okay, I use that all the time. Anyway, that's fun.

Let me switch back over, talked about that probably enough. Okay, back to the switch here. All right, so that is coming up in January.

The next one that's popping up as far as events: they have the Hudson Alpha Tech Challenge every year. We normally participate; we've had folks that created teams before.

We've had folks on winning teams before. It is a bit of a time draw, though.

It’s over a weekend.

They kick it off on a Friday. It's the last weekend in February, kind of rolling into the first part of March. They also really need technical mentors. So if you're good at helping set up a website, or helping look at data, I mean, they've got all kinds of different skills they're looking for. The primary folks coming in for this are high school kids, college age, early career, that kind of thing. Last year there was a crew from down near Eufaula, Alabama, where some high school coach or mentor just brought like 10 kids up here. They were staying at a hotel around the corner, and they just thought this would be interesting and something new. I'm like, you're right. So they're doing that again. So there'll be some info going out on that.

If you're really interested in the biotech industry, or if you're smart on AI or computer science or data science and you're interested in learning more about biotech, it's a really good way to connect with some of the thought leaders that are working out of this building. A lot of times you're not going to get that kind of face time otherwise, because everybody's busy, understandably. So that's coming up.

So I know there’s been some chatter on the discord that we run. There’s at least one person looking to put together a team.

So if you’re interested in that, they haven’t even dropped the challenges yet as far as what they’ll be. There’s generally a little bit of prize money.

It's really not worth the hours that you're going to put in, but it's kind of fun. Is that one person you? It might be.

Well, I mean, I threw it out there in the discord chat.

Okay, yeah, that was you.

I can't always put names together with what we call ourselves on there. I've done it a couple times, you know, to your point about being available.

Yeah, I mean, if you've got 72 hours straight, that's helpful, but I usually only have like six to eight hours to give. Also, I don't know when they release it.

I thought they did it like right the day of. Yeah, they did release something showing what they were doing. So there's some sort of... there's 2024 information out there. You can go look at what they did last year.

Right, I didn’t see any for this year.

And usually they won’t release like data.

Sometimes they’ll tell you what the challenges are going to be, but they won’t release the data.

You'll get to see like, hey, we're going to do this, but the data won't be available. Yeah, and from a data science and AI perspective, it's been kind of hit or miss as to whether the challenges they throw out there are viable or interesting. A lot of times you spend the first day, at least for me with the mentoring, helping people, especially the high school folks, figure out what to do with the data. I'll find people that'll be excited on day one and then they'll just fizzle. I'm like, well, I understand, but let's... I want to take it all the way through to the brief. Right. Personally, that's what I would like to do. I don't want to be a mentor. I could be a mentor, but I would like to do the challenge and do the brief. Well, we should be able to put together at least... I want to put this brand new RTX 4060 in there. Oh yeah, that'd probably work. And yeah, the reason I usually do the mentor thing is because they'll give you a block of like two or three hours and you just come hang out, go help where you can. They bring food in and stuff. I mean, it is mostly high school and college. Yeah, I don't know if you were here last year.

I was helping and they actually… it was time to go and I’d get… my wife had dropped me off and she was coming back to pick me up. I got a call saying, hey, they’re having a 5k race and I can’t get there. Like, what are you talking about?

The one where they're doing it around the DNA. They did the Huntsville color run or something. It happened to get scheduled the same day, and you had parents trying to come pick up kids who couldn't get here because the roads were closed around the... you know, it was kind of interesting. I remember I was working with some of the organizers, and they were saying the police had closed down the road. Like, you can't do that. What do you mean? Our name is literally on all the roads, you know. Well, it's Genome Way, but anyway. But it is generally a lot of fun, and if you can't be on a team, come hang out, do some mentoring. A lot of the questions you answer are generally Python related, because they tend to go that direction. Sometimes it's a, hey, I need to figure out how to make a web server that does this or that, because I'm trying to present my stuff. Knowing how to use either Flask or Streamlit or something cheap and easy to get them off the ground quick is usually okay.

Oh, so that’s fun. After the weekend after that is a Huntsville B-Sides conference, which I don’t know if anybody here has ever done stuff with. You have, okay. It seems to right now be related to the security stuff, cyber stuff and whatnot. Yeah, B-Sides is primarily a cyber security conference.

It is also specifically designed not to be what you would consider a normal conference. It's not like the Cyber Security Summit, where you've got all kinds of industry, all kinds of businesses and everything; no one at B-Sides is going to try to sell you anything. B-Sides actually takes place in multiple cities throughout the year, and it's designed for people that are either just starting out in cyber or have been cyber security professionals for a while, and they get to talk about what they're interested in. Sometimes you get people that have never presented before; sometimes you get repeat presenters, or people getting ready for a bigger conference who want a dry run of their presentation.

There are some workshops also, usually specialized. Let's see, I think there's one that's been going on for a couple of years about using PowerShell for some beginner log analysis, that kind of thing. It's very cheap.

The workshops usually aren’t, but that’s on the Friday, the first Friday of March, and then Saturday is when all the panelists are going to be there.

And there's a capture-the-flag competition usually, and there's a raffle at the end of it. I think it's like $15 to get in, another $10 if you want a t-shirt, because they're pretty cool.

If the Cyber Summit is any indication, there's probably going to be some AI talks showing up there as well. It's at Calhoun Community College, so easy to get to, and I don't think they charge you for parking, unlike some other places where we've had fun at conferences. I've been approached, I think, by somebody that was involved to see if there's anything we could put together to go talk about. I don't really know a whole lot about cyber, so I may just go and learn stuff. It's a good place to go and just find out something new. I actually really enjoy it, just to see if there's anything new and upcoming that I haven't seen yet. And it ends up being cheap. The AI Symposium is not cheap. Unless you guys know a promo code or anything.

There's no promo code, and I talked to them about this last year. I actually said I wasn't going to talk, and I didn't. I did go. Luckily, Cohesion Force paid for my ticket, which I appreciate. It's good to work for a company that does stuff like that. Personally, I wasn't going to swing it; it's some hundreds of dollars. It hasn't gone up yet? It has. That's the after-December-27th price.

I was looking at it.

It's $750 for the all-access pass. Definitely not cheap. They do, however, after some period of time, I don't know how many months, actually post videos of all the sessions on YouTube, which is, I think, pretty good. You can go look at all the ones from last year, get an idea of who's talking about what, and figure out whether you think it'd be worth going or not, especially knowing they're going to drop these on YouTube eventually. If it's not something you immediately have a need for, maybe wait. The next thing I wanted to drop into: today, well, technically it's not the first Wednesday of the month, but they dropped the DoD SBIR topics.

There are several that moved from, what do you call, the pre-release to open. Some of these we may have talked about already. I’m going to drop the ones that are currently in the open state and just look at what dropped today and looking initially at anything that is AI-related.

Unfortunately, I used the search thing before I got here, and some of these aren't marked the way you'd think they would be, so searching just for "learning" or "artificial" doesn't necessarily get you all the stuff. But it's only 16 of them, and I don't plan on going through all of them, just the ones that I noted. And the other weird thing this year: there are several of these that are both SBIR and STTR, and they look like the same topic. It's like they put them out on both trying to see what they get, which I don't know that I've seen before. Seems a little odd.

Is it the same agency?

Yes.

Okay.

I mean like you’ve got this first one, army2025.4.

Yeah, that’s one of them.

Following that.

Yeah.

I'm pretty sure they're word for word the same as well, which is interesting. Yeah, so to your point, I think they just wanted to open it to SBIR and STTR, and I don't know why. Maybe they have STTR dollars they have to spend. Oh, and some of these are wide open enough that you could almost treat it like, what is it where they put out a, it's not an RFP, an RFI. It's almost like, you know, what are you doing that's close to this?

You know, anything like that.

This one that was a little, I’m not sure if I’ve got enough info.

So it sounds like they want LLMs to do the things. To do the work of integrating from various sources.

It looks like an adapter of some sort.

So they’ve got a lot of data in a lot of different formats coming in a lot of different feeds, and right now as a person you could look at this and go, hey, this is the same thing, just format it a little different.

Yeah, you know, it’s almost like reading resumes.

Everybody’s got their own format.

Most of them have the same info on them.

So imagine using an LLM or something to make it possible to move info from one system to another, and if you could do it in a way that it couldn’t tell.

It sounds like another thing from your cyber talk. Yes, I could fake it, make you think I'm sending actual info. Fun. So you'd really have to get into some of the specifics. Actually, even with the references, it would be helpful if they were like, "I have these two systems and I'm trying to get them to talk." It's a little bit amorphous.

And what is it they’re actually asking for?

You can’t tell the Chinese everything. No, you can’t. No. So that’s one of them.

Then again, this one really didn’t have a lot of, you know, if I wanted to talk about what the topic is, it’s really, well yeah, they’re systems and they speak different languages and they have different things and they want to connect them and make them interoperable. It’s never a good sign when the references for one of these is a bunch of news articles instead of papers, is what I generally have found.

Well, it's going to be tough for them to talk about what they actually want to interoperate at the unclassified level.

Because of, I mean, the annoying part is that this problem, from a tech perspective, interoperability, is not incredibly difficult. But on the DoD side, just being able to get access to all the data and then passing it around at the proper classification levels, that's where it becomes virtually impossible. So, yeah, I'm a bit pessimistic on data in the DoD, but I'll leave it at that, I guess. If you were at a company working with multiple systems that had different things, and you already had access to the knowledge and the data, and you could build an LLM around it, this might be something for you. Some of these you come across seem like they're written directly at some product they're trying to get wedged into their system, but this one seems a little wide open.

So, it is going to be maybe a smaller selection of folks that are capable of writing to this based on, you know, well sure, DOD, give me all your systems in your info and I’ll tell you how to connect.

It’s gonna be, sure, no.

This one was actually really interesting to me. It doesn’t say AI, but autonomous drone swarm sounds like AI. So, this one, they have something that actually is kind of like a floating bridge if you want to do a river crossing in the military, and to set these up they use basically floating pontoons and things that this thing sits on top of, and they’re trying to find a way to autonomously get these things out into the right location so that they don’t have to put a person in each one of those. So, it sounds fun. I would love to play around with it, but I don’t have any like autonomous drones that float or, you know, anything.

This would be interesting to apply some swarm theory or swarm communicating, you know.

So, I mean, there’s some things you can do that might be fun.

Also, we could test it down on the Flint River somewhere. I mean, there are some places this would work. Has anybody got any floating drones?
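For flavor, the matching part of that swarm-positioning problem could be sketched as a greedy nearest-first assignment of drones to pontoon slots. This is purely illustrative; the actual topic would need real path planning and collision avoidance on top:

```python
def assign_drones(drones, slots):
    """Greedy nearest-first assignment of floating drones to pontoon slots.

    drones, slots: lists of (x, y) positions. Returns {drone_index: slot_index}.
    A real swarm would solve this optimally (e.g. the Hungarian algorithm)
    and coordinate motion; this only shows the matching idea.
    """
    dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # consider every (drone, slot) pair, closest pairs first
    pairs = sorted(
        (dist2(d, s), i, j)
        for i, d in enumerate(drones)
        for j, s in enumerate(slots)
    )
    assigned, taken, out = set(), set(), {}
    for _, i, j in pairs:
        if i not in assigned and j not in taken:
            assigned.add(i); taken.add(j); out[i] = j
    return out
```

Greedy matching can be arbitrarily worse than the optimal assignment, but it's easy to reason about and a fine starting point for a demo.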

We were talking about it earlier: you could still use UAVs, and instead of air it would be an aquatic vehicle, and you could confuse everybody when they show up looking for their helicopter drones and get a boat. So, that one drops. And I always have to remind myself to complain about the horrendous user experience of this site, with all the scroll bars.

I’m definitely with the right group.

Oh my gosh, we hate this thing.

Several of these, I didn’t even notice the DARPA ones earlier this morning.

That's why most of the DARPA stuff, at least this round, all went to STTRs instead of SBIRs.

So, I didn’t get too much in there.

They are looking for some pretty interesting.

How do I take bias out?

If I find something in a data set that I’ve trained on that’s causing some kind of bias in my model that’s not intended, how do I take it out?

If my model has happened to have learned something, and I want it to not follow that path, or how do I, you know, maybe clipping something.

Well, I don’t know what other options you’ve got. I know I’ve talked to a couple of folks about this from a code gen kind of a perspective.

Let’s say I’ve got a lot of code that I trained a model on, and now I’m using it to generate new code or do things like that, and then somebody runs this code I trained on through some commercial scanners and things, and they find some problems. Well, if there’s a way to remove that from the thing I’m about to generate, because I’ve got folks clicking buttons, and it’s basically your new stack overflow, if you will. So, of course, you copy and paste initially, and then see if it works or not.

If it works, you think you’re great, but you may not know that there’s a vulnerability that you just inserted into a code base.

With the stuff I work with, you'll find it as soon as you commit, but either way, it's an interesting thing.

Yeah, there's been some efforts, and it's more on your traditional supervised learning side. If I have specific data pieces that I'm passing in, then for simpler models I can go in and figure out how much that data corresponds to various weights, and potentially adjust the weights afterward based on what they would have been had the model not seen that data. But for a large generative model where you have billions of parameters, each individual weight matters so little that it becomes an incredibly difficult task: how do you pull that out without some sort of post-processing method like you're talking about, or a model that learns your model and figures out which 800 weights it would have to pull? Yeah, so, fun stuff. Just make a two-step process and put it in a black box. Right. And put a symbol on it that says, magic happens here.

Yeah.

Run it through the linter yourself before turning it in. Wrap it all up and just call it the larger language model, or Language Model Pro Max. You know, some of the stuff I've seen... there was one, we'll take another quick look through, and for this one I am going to add more filters.
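Stepping back to the unlearning discussion: for simple models, "what would the weights have been without that data point" is literally computable by refitting. Here's a toy sketch with a one-parameter least-squares model (my own example, not from any topic):

```python
def ols_slope(xs, ys):
    """Least-squares slope through the origin: w = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def unlearn_point(xs, ys, i):
    """'Exact unlearning' for this toy model: refit without sample i and
    report how the weight moves. For billion-parameter generative models,
    this brute-force refit is exactly what is infeasible."""
    keep = [k for k in range(len(xs)) if k != i]
    before = ols_slope(xs, ys)
    after = ols_slope([xs[k] for k in keep], [ys[k] for k in keep])
    return before, after
```

The punchline from the conversation above is that this refit-and-diff route doesn't scale, which is why the topic is asking for something smarter.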

I want to go back to, I forgot where the, okay, just the open ones.

Just click that. Okay, and because there’s 167 currently open.

Oh my god.

It’s the first big batch that they did in December.

And some of these were somewhat interesting. This one, if you were big into signals and, what's the other word, kind of intel-type stuff, there was some interesting stuff there that I really don't know enough about to speak to. If you are in love with digital twins, there's this one, and I have no idea how you would do it. They're looking for an AI approach to figure out whether a digital twin is actually representative of the thing it's supposed to represent.

So if I have an actual system and you give me a digital twin, is there some way to evaluate where the behavior of the digital twin matches the behavior of the actual system?

So I have no idea.

I guess if the system was entirely digital to start with and you're trying to get a digital twin of a digital system, I could see it. But the main use of this is for a physical system: how do I make something digital that matches, so that I can actually do some tests or some kind of interoperability check? It was interesting, because with a lot of these things, if you put digital twin in the title you can get funding.

If you put digital twin and AI in the title, oh my gosh.

I just showed up at your door.
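As a strawman for the "is the twin representative" question, you could start by comparing behavior traces. This is purely a sketch; the topic presumably wants far more than a single error metric over one run:

```python
import math

def twin_fidelity(real, twin):
    """Root-mean-square error between a real system's output trace and the
    digital twin's trace on the same inputs. A crude first cut at 'does
    the twin's behavior match' -- real evaluation would need coverage of
    many operating conditions, not one trace."""
    if len(real) != len(twin):
        raise ValueError("traces must be the same length")
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(real, twin)) / len(real))
```

A score of 0.0 means the traces agree exactly; what threshold counts as "representative" is the part nobody knows how to do, which is presumably why the topic exists.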

Yeah, I mean, we need at least two coats of that at least.

I don’t know what a radome is.

They’re looking for some way to automatically get that one.

This one was actually interesting.

So if you have a fence line, let's say around an installation, and you need to track drones flying up to 50 feet above the fence line from an optical sensor perspective, find a way to space these out along the fence line and then feed information back to some other system to alert you if something flew over. Automated sentries, something like that. And they're looking for optical, which has a lot of interesting aspects. We were looking at something like that a long time ago when drone detection first came out, because a lot of these things are plastic; they don't have a lot of radar return. You can actually hear them better than you can see them half the time. But then you're trying to figure out how you can see it, especially if it's dark, especially if it's raining, and you've got to figure out a way to keep your sensor clear, things like that. It's an interesting problem. Anyway, these are ones that just went open today.
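The sensor-spacing part of that fence-line problem is at least easy to rough out with geometry. Back-of-envelope only; the field-of-view number below is made up:

```python
import math

def camera_spacing(height_ft: float, fov_deg: float) -> float:
    """Max spacing between upward-looking cameras so their fields of view
    overlap at a given altitude: each camera covers a span of
    2 * h * tan(fov / 2) at height h, so place them no farther apart
    than that. Ignores rain, darkness, and lens fouling -- the hard
    parts the topic actually cares about."""
    return 2 * height_ft * math.tan(math.radians(fov_deg) / 2)
```

For example, a 90-degree camera gives about 100 feet of coverage at the 50-foot altitude the topic mentions, so you'd space them at most 100 feet apart.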

So you got 30 days if you want to put together a proposal. Let me go ahead and close that one. Here’s a sitter.

I’m not sure if this one was written specifically, you know, towards one particular kind of tool or not.

But it sounds cool, sounds doable with what they’ve got. But again, that’s a sitter and I don’t have a way to go after it.

This one was interesting. I’m gonna skip this one. I don’t know anybody that has a bunch of buoys they could drop in the ocean and then figure out how to, I mean, with this one you actually needed some heavy insight into what these signals look like already. And this one felt to me like it was written towards a specific thing that may already be out there. Automated writing of problem reports.

This one was fairly interesting.

So this is specific to the Aegis platform, I think is where this came from. I think, I see an ACS here somewhere.

It’s in the description.

Yeah, yeah, Aegis is in there.

Okay, yeah, there it goes. So when they get problem reports, they get them from a bunch of different people, and if you've dealt with the general public, you'll get everything from "hey, it didn't work" to a thesis on why it didn't work and what you should do to fix it. So this is basically trying to find ways to get all of the info out of these things. I don't know if they went as far as not only extracting the information, but also asking more questions of whoever it was putting things in. This one seems much more aligned with things we've played around with in the past. So you're going to write the problem report autonomously so that an agent can read all the autonomous reports?

I think it’s you get these from users, these problem reports, and it could be that, it could be some system where you’re telling me your problem, and I may know of info that I need to ask you for.

Okay.

You’re like, hey, it didn’t work. Well, okay, what button did you push last? Or what, you know, there could be additional things even probably going back to the user for, you know, what did you do immediately before the problem you saw?

And then standardize that so that the next people can... So either people, or maybe another agent or another system, could pick that up.

You possibly could start working through a way to know out of all the problem reports you got in the last year, which ones were actually effective?

Which problem reports got fixed within a week?

Let's start with those, because either it was an easy problem or the report was written well enough that you could actually diagnose it quickly.
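That triage idea, start with the reports that got fixed fast, could be sketched roughly like this. The report structure here is entirely made up for illustration; a real system would pull from whatever tracker the program uses:

```python
from datetime import timedelta

# Hypothetical report shape: dicts with "opened"/"closed" datetimes
# ("closed" is None for reports still open).
def effective_reports(reports, window_days=7):
    """Return reports resolved within `window_days`, fastest first.

    These are the ones worth studying: either the problem was easy,
    or the report was written well enough to diagnose quickly.
    """
    fixed = [r for r in reports if r["closed"] is not None]
    quick = [r for r in fixed
             if r["closed"] - r["opened"] <= timedelta(days=window_days)]
    return sorted(quick, key=lambda r: r["closed"] - r["opened"])
```

From there you could feed the "quick" pile to a model and ask what those reports have in common that the year-old open ones don't.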

You know, I mean, that's a decent thing. It bothers me that one of their references, I mean, some of the stuff is, well, hey, let's just reference the NIST doc and some other stuff and just say, okay, we're good, do NIST. So I can't really tell whether this is written towards a system that may or may not exist.

So there’s that.

If you do a lot of missile defense stuff, they want an adaptive trainer. This one, unless you are actually working on one of these systems already… Oh, this is actually the one, Tony, this is the one that had AR and VR in it.

And it doesn't really get into it until the bottom, where it says, by the way, it should use VR, AR, or other immersive technologies. So it's a trainer? Yeah, they want some kind of a training tool for, I'm not quite sure who exactly they're trying to train, because they'll talk about operators, but these systems have lots of operators doing different kinds of things. But it was pretty interesting. So they're looking to update the training content as the battlefield paradigm changes, so you don't get caught behind. Just so you're not training on 1980s methodology. Old 2024 data, that's forever ago, geez, there have been like three generations of AI since then.

Yeah, and that is N251-35.

That one caught my eye. Responsible. This one, I've actually started to see a decent amount of traction. Let's see, machine learning rapid approaches, sonar. Oh, this is all data synthesis, so they're trying to synthesize new data that's valid from a physics perspective, and that's okay, I get it. Oh, interesting, okay. No, that's interesting.

Oh, do deep fakes catch your attention?

What are they doing here?

They're trying to deep fake a sonar. I guess the next part, I mean, there's probably another one trying to detect deep fakes for sonar. So, interesting stuff. I'm going to keep moving because we are at 6:47 as far as time.

Quick look tool.

I'll hit this one, and then the other one about weather. And I don't really, this looks like something where they're just trying to use AI on something they thought they had a problem with, and I'm not quite sure where to go with it, you know.

It’s like, hey, we’ve got a bunch of test data, and different people with different backgrounds or experience levels look at it, and they disagree.

Yeah, it happens. Oh, that’s a good one.

We even have experts that disagree sometimes.

So do I. I don’t know how you’re going to get an AI model to help with some of that, you know.

Maybe a wish.

The other one, and this was actually, wherever, where was the one for weather prediction? Yeah, this was actually Navy. So one of the podcasts I was listening to over the holidays actually had an interesting concept of using an AI model to actually predict weather without using the physics models or anything like that.

Just a, hey, these patterns seem to have happened a lot, and given this thing that you observe from either satellite or radar or whatever, what do you think happens next? I mean, it's like, hey, predict the next token, really. The intent there was to find ways to put some of these models out into places that don't have NOAA or functional governments, or where, you know, you can't just pull up the radar on your phone and see what's coming.

So, and a lot of the data that you need for like a physics model has to have so many observations reported over the past.

You can imagine something similar at sea.

You know, I’m not near a country that has 800, you know, or 100 years worth of, you know, all the data.

So how do you, how do you work from sparse data? Anyway, it seemed interesting. Google actually, I think it was a couple months ago, open-sourced their GraphCast model, which is an AI model for predicting weather, but they used 30 years of data from the European equivalent of NOAA to build that model; a lot of the same things apply here. It's a very, very difficult problem because of the size of the data. If I can remember, I'll post at least on Discord which podcast it was, but it was really interesting. They're turning these predictions around in like an hour, whereas your GFS model or whatever, I mean, they're running on supercomputers and it takes like a day or more.

They’re just taking in like the budget sensors or?

For NOAA or for?

For whatever this is, the thing Google did. So Google took all of that data as input, so then it will generate, or I guess "generate" is the wrong word, but it's more accurate for global weather forecasting. So basically, if I have some known state, Google's GraphCast would be able to propagate that out for, I think it's up to 10 days in the future, in under a minute.

So they take some current state and then are able to propagate that out up to 10 days into the future, and you can get that state from, like, the NOAA forecasts, the way that they are published.
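That "propagate the current state forward" idea is basically an autoregressive rollout, the same predict-the-next-step loop as next-token prediction, just over atmosphere states instead of tokens. A minimal sketch, with a toy stand-in for the learned one-step model (GraphCast itself steps roughly six hours at a time, so a 10-day forecast is on the order of 40 applications):

```python
def rollout(state, step_model, n_steps):
    """Repeatedly apply a learned one-step forecast model.

    `step_model` stands in for the trained network: it maps the
    current atmospheric state to the predicted next state.
    """
    trajectory = [state]
    for _ in range(n_steps):
        state = step_model(state)   # predict next state from current one
        trajectory.append(state)
    return trajectory

# Toy "model": a scalar state that decays by half each step.
forecast = rollout(10.0, lambda s: s * 0.5, 3)
```

The catch with rollouts, in weather as in text, is that errors compound: each step feeds the previous prediction back in, which is part of why skill falls off the further out you go.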

You get like the, the current time of like all of your sensors and everything, and it gives like a one step run of their big model, but then they also run their models for up to, it’s between three to four weeks, depending on the frequency and resolution that you look at.

So I'm assuming that Google did something very similar, where they took all of these past forecasts that NOAA had, compared that to the truth data that you get, and then were able to build a model to do that 10-day forecasting. Is that some sort of a graph model?

They call it GraphCast, so one would assume.

I’m assuming it’s not.

Yeah, hopefully. Most weather data you can almost think of like voxels or pixels, because you subdivide it by latitude and longitude, and it can go down to, the smallest I've seen is like a quarter degree, but then, going up, you also have different pressure levels. So you end up having this grid of pixels, essentially, and you can use graph models to relate the pixels to each other, or you can use something like a convolutional neural net. It sounds like the same kind of thing they do with Whisper, so, yeah, it's cool. I think it was 2019, one of the sessions we did, we were playing around with using recurrent neural nets to actually walk through radar. We connected up with all of the radar data that's posted on AWS, you can get it for free, figured out how to load that in, and basically tried to predict what the next frame was going to be. That was five years ago, and, crap, six years. Wow, 2025 is killing me.
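That voxel picture maps onto a graph pretty directly: each lat/lon/pressure cell becomes a node, with edges to its neighbors in each direction. This is a hypothetical sketch of that neighbor structure, not GraphCast's actual mesh (DeepMind's is a multi-scale icosahedral mesh, which is considerably fancier):

```python
def grid_graph(n_lat, n_lon, n_levels):
    """Build neighbor edges for a lat x lon x pressure-level grid.

    Each cell is a node; edges link cells adjacent in latitude,
    in longitude (wrapping around the globe), or in pressure level.
    """
    def node(i, j, k):
        # Flatten (lat, lon, level) indices into a single node id.
        return (i * n_lon + j) * n_levels + k

    edges = []
    for i in range(n_lat):
        for j in range(n_lon):
            for k in range(n_levels):
                if i + 1 < n_lat:                    # north-south neighbor
                    edges.append((node(i, j, k), node(i + 1, j, k)))
                # east-west neighbor, wrapping at the date line
                edges.append((node(i, j, k), node(i, (j + 1) % n_lon, k)))
                if k + 1 < n_levels:                 # vertical neighbor
                    edges.append((node(i, j, k), node(i, j, k + 1)))
    return edges
```

A graph neural net then passes messages along those edges, while a convolutional net would slide a kernel over the same grid; both are ways of saying "nearby cells influence each other."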

Luckily, I haven't written any checks yet and had to scratch out 2024 and write over it. That's some pretty interesting stuff. The other neat fact, I'm assuming it's a fact, because they said it in a podcast, and we all know podcasts are factual, was that weather prediction has been progressing, as far as how far out they can forecast, at about one day per decade. So you can get a somewhat reliable 10-day forecast now, that's why you see 10 days; a couple of decades ago, five days out was the best you were going to get. And so they were trying to figure out, how far out can you predict in a way that's actually somewhat actionable? So there's that.

The other thing I wanted to talk about, we've got roughly five minutes or so before we'll shut it down, and a lot of this came out of what we hit at the end of the year-in-review thing. I'm definitely interested in what AI's impact on law is. Last year we did law's impact on AI, so this year we want to flip-flop it.

I’m looking to get Andrew to come back and talk along with whoever he wants to bring.

That should be pretty interesting. One of the things I want to cover: at the job I'm on right now, we actually have a generative AI piece that's plugged into VS Code for certain kinds of code, and I can use that. That's pretty interesting, but I was wanting to take a look across the field. I know Charlie used Cursor, and I've played around with a couple of different plugins to Visual Studio Code and stuff. I thought about just doing a whirlwind tour of what code-gen AI things you can play with now, and how do they work, what's it like, kind of passing it off between three or four of us. I believe Copilot's free now for some amount of time if you're on GitHub, so I'll probably play with that one too.

Their next tier up is only $10 a month.

You get a lot of the same features as you do for Cursor.

You also get access to both Claude and GPT-4 models as well, directly built into VS Code.

I would like to see, I don’t know if Michael’s on.

I had a conversation with Michael about what they're using right now at Google, and it was pretty eye-opening. Of course, they apparently have their entire toolset; you use their editor with their models, and, I mean, I kind of see the indoctrination thing happening, but from what I get from him, it's well above intern level, more like two or three years into your career, a level-one, level-two software developer, what they are getting out of it. You know, you just describe what you want and the classes, and it goes and does it.

It’s at the same level that you would see out of a, you know, a one or a two-year developer from his perspective.

I don’t know if he can share.

So finding out what would be available, that might be fun. There’s a piece that Jacqueline had dropped, and I still haven’t made it all the way through this, because there’s quite a few, I don’t know what the count is.

There's a federal AI use case inventory that is posted out on GitHub, where they have surveyed a lot of the federal agencies and said, what use cases do you have where you think AI or some kind of tool would be helpful, and can you tell us what that context is? There's a massive amount of data in there now; they started, I think, in 2023 and updated it for 2024, so we may want to cover that. Josh had mentioned going through kind of a paper review, maybe spinning up a series; we'd probably do that in addition to this meetup, so it might be a separate thing. I'm guessing mostly Zoom, or do you think all virtual?

Yeah, we'd probably do it mostly virtual. You're going to be looking at a paper anyway, unless there's something special we wanted to do. Right, and so I'll probably put out a survey or something to see what time frame works; it's probably related to when Josh actually would like to do it, and, I am pinning this on you a little bit, what availability he has. Because, yeah, we've done it before; basically, I can't remember the name of the guy that used to do paper reviews and post them, and we would actually just walk through what he had done.

I can’t remember the name of the guy.

Routine, he was kind of famous, and then he made some crude comments on Twitter or something and he got un-famous real quick. That's the arc. Yeah, you don't have to do that part, I don't know. So anyway, we'll do that. Then, I am interested in seeing if I can get somebody from the AI Task Force; I've actually briefed them, and I'd like for them to come brief us. You know, what are you doing in town, what do you have going on, how do we get involved, things like that. They are making it available for folks if you want to join that task force. They're still trying to figure it out; they're kind of forming it in the same way that Cyber Huntsville runs, where you have a membership, you have meetings, and you've got committees, all the stuff. Oh, I'd be happy if they could just get a website stood up that I could point people at when they come to me asking about AI Huntsville. I'm like, that's not me, that's a city thing; I'd love it if they had a website that I could reference. We had some other discussions about, initially it was, how do I know if this thing is giving me true info or not, and I think I'd rather take a step back and say, how do I evaluate a model to know if it's doing what I think it should be doing, because that's a little different question. So we may work through that, and then I'm happy looking for some guest speakers to come in and talk about stuff, especially if you have good stuff that you've talked about before, or if you've got a side project.

If you're interested but haven't really done anything like a talk before, let me know. I can set stuff up and, I mean, I can help, especially since an hour can seem a little daunting if you're not doing this a decent amount. So there's that, and then we may have some hot topics, where we just do a survey, you bring a topic, and you get three minutes to talk about it.

You know, it’s got to be somewhat AI adjacent or whatever, and we’ll do that.

Then we're also looking at our first social of the year, I think in March.

We’re not sure when, but it’s too cold right now to really go outside.

We usually do them at Stovehouse, where you can get a drink, hang out, and get your choice of a bunch of different foods. Normally they don't have a live band playing on Wednesdays, because it's really hard to have a meetup outside while there's blaring music all over the place, so we'll probably look at that. So, if you've got thoughts on that, let me know. All right, let me check the chat, and then we will, wow, we've got a bunch of folks online. So, for those online, hi, welcome.

Let me see if I can figure out how to get to the chat.

All right. Yes, so I did make a sign to get in. I forgot to bring tape to put it up, but luckily I had a bunch of these name tags. So, a few name tags to stick it up on the door, that worked. Oh, NSF. Dr. Lassert here, he normally tracks NSF-type drops.

You’ll find the SBIR topics for NASA, NSF, Department of Energy, some of the others are on some other system, and they drop at a different rate.

NSF, I know, is a lot more kind of open year-round if you’ve got a nice thing and it fits within what they’re looking for.

But yeah, I will, if I can find that out, I’ll let you know.

I mentioned that one, I think. Okay, yeah, it looks like for NSF, they have three deadlines for this year, March 5th, July 2nd, and November 5th.

I can’t find if they’ve already put the ones for March 5th in pre-release or not.

Scores, yeah.

Does NSF have named topics that they're soliciting, or do you have to solicit your… I believe they have named topics. They do have that year-round thing too, three a year or something like that. They have a BAA, which is a broad agency announcement that they do once a year, and that's kind of a bring-your-own-topic; if you've got something that would fit with them, you can kind of propose at will going that route. I don't know this personally, but I've heard that it helps if you actually have somebody on the other side that's waiting to receive your topic and can help pull while you push. That tends to help. And I believe that's it.

Well, the person in Zoom who was asking about the AR/VR topic, they're referring to one you didn't cover. Okay. AF251D025.

AF251D0.

It could be that filtered.

It’s just on OBS Center.

Hey, this should be all open.

If it comes back, don’t fail me now.

One thing I didn’t know about that one was, do you see what service it was that let that? Because I couldn’t see it on my… It doesn’t format very well on a smartphone, unfortunately.

It doesn’t format very well on any kind of device. It was AF251D0.

Beautiful.

I need to go grab that one. Am I still on here? Yeah, still here.

Okay. I’m sorry.

They are taking the site down for maintenance. I saw the thing yesterday. There was a… It's seven o'clock, so they posted something about that. Wow. I do have it still up. The name of it is Spatial Tracking for Enhanced Automated Manufacturing Extended Reality, D-XR. So, that's cool.

Yeah, but it is AF251D025.

What, is it working for you?

Well, I haven’t refreshed the page. I just left it open. I’m not refreshing it because I’m the one that’s stopping their maintenance right now. I can’t let that guy go home. I dare you. All right, I’m going to stop the recording.