AI without Python


Recording of Huntsville AI Meetup

Transcription provided by Huntsville AI Transcribe

So let’s jump right into it.

So welcome Alex, talking about data science outside of Python.

Hi. So, up front: a lot of this stems from the fact that I do a lot of data aggregation work at Northrop, and Northrop has to vet all of the third-party packages, and most of our tooling is in Java. So I can either... if that doesn't work, you can do it. That might be the other one. I'll just do this. Or you can point that at me and I'll point, sure. So, a lot of our systems are real-time systems.

And we use a certain messaging tool that does not support Python.

Another big problem in all of this is that when you want to classify things directly off the network, you don't really want to have all the intermediary steps.

And I've also had the challenge of just moving private packages around: with Java I can easily do a Maven dependency get, pull everything onto my work laptop, go straight onto my development box, and start working through it.

Another thing is that a lot of the integration patterns work a lot better in C# and Java than they do in Python. So if you need to plug one framework into another framework, it works very well. I work a lot in visualization and collection.

So it's primarily dealing with DDS messaging, and a lot of making charts on both ends.

And I have been able to do some classification utilizing a tool called Tribuo. From there, I've been exploring other tools, trying to figure out the whole ecosystem of, okay, I just want this stuff to work. So you try out a bunch of different things. And I bounce across operating systems.

So you've got a Windows laptop, you know, a RHEL workstation, and everything works one-to-one. Things I've enjoyed about Java: the tooling is a lot better than what I'd otherwise have — package management and everything around that. I've found it's really easy to use what's already set up to compile everything, have things like checks, and know that I can actually work inside real-time environments. That's a big change that's happened in Java in the past couple of years: the introduction of low-latency garbage collection, with pauses down at the sub-millisecond scale, so you can still count on, say, 18 milliseconds of uninterrupted work.

So if you're on a two-millisecond schedule, you still know where you stand, even on a hardware-based ecosystem.

Java is the language, or at least the original language of big data.

Back in the early-to-mid-2000s, the big-iron systems were the first to support a terabyte of RAM, and those ran on POWER or Itanium. But those systems are expensive; you're not going to develop on them. You're going to develop locally on your Windows box, and your main answer there is Java. And I really enjoy the whole write-once-run-anywhere concept of the platform, and that is something I've found other ecosystems don't have.

So Java actually has a lot of backing from other large companies.

So my favorite out of all of these is Tribuo, which, even though it's essentially end-of-life, is quite simple to work with, as I'm trying to show here by training on the simple Iris dataset.

And it's not that far off from scikit-learn. Yes, having to specify all the types is a little annoying sometimes, but your IntelliSense can generally pick up everything that you want, and if you do it enough times, it figures out that that's what you actually want.
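As a rough illustration of how close this feels to scikit-learn, here's a minimal sketch along the lines of Tribuo's own Iris classification tutorial — the file name and column headers are assumptions about your local copy of the dataset, not anything from the talk:

```java
// Sketch in the style of Tribuo's Iris classification tutorial.
// "bezdekIris.data" and the header names are assumed; adjust for your copy.
import org.tribuo.MutableDataset;
import org.tribuo.classification.LabelFactory;
import org.tribuo.classification.evaluation.LabelEvaluator;
import org.tribuo.classification.sgd.linear.LogisticRegressionTrainer;
import org.tribuo.data.csv.CSVLoader;
import org.tribuo.evaluation.TrainTestSplitter;
import java.nio.file.Paths;

public class IrisDemo {
    public static void main(String[] args) throws Exception {
        var loader = new CSVLoader<>(new LabelFactory());
        var headers = new String[]{"sepalLength", "sepalWidth",
                                   "petalLength", "petalWidth", "species"};
        var source = loader.loadDataSource(Paths.get("bezdekIris.data"),
                                           "species", headers);
        // 70/30 train/test split with a fixed seed.
        var split = new TrainTestSplitter<>(source, 0.7, 1L);
        var train = new MutableDataset<>(split.getTrain());
        var test  = new MutableDataset<>(split.getTest());

        var model = new LogisticRegressionTrainer().train(train);
        var eval  = new LabelEvaluator().evaluate(model, test);
        System.out.println(eval);
    }
}
```

The shape is the same as the scikit-learn workflow — load, split, fit, evaluate — just with the types spelled out.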

TensorFlow Java has been painstakingly annoying to work with — one, it's TensorFlow — but you also lose a lot of the configuration that the Python side of TensorFlow gives you.

It's also annoying that Google only recently moved to Java 11, so there's still a lot of friction between the Java 11 and Java 8 builds.

So I found that the Java 8 version of TensorFlow Java works fine, except it doesn't use AVX out of the box.

You have to actually go in, unpack the jar, change some config variables inside the jar, repack the jar, then recompile it into your system, and then it will work.

But you can automate that.

How did you figure that out?

I mean, that's not something you'd normally find.

Well, I figured this thing should be using AVX, so I went in and started diving my way through. I looked at the JNI library through the Java platform and made sure it was hitting everything.

And then you just start going through the TensorFlow source code.

Okay.

That's got to be it — which is AVX — but I've never seen anybody do that for a Java server in the same sense.

How long?

What are the integration points between the compiled TensorFlow and Java?

Well, I mean, it is using the ABI layer, and that’s what’s connecting Java and TensorFlow.

And then you compile in all the shared libraries, you put that into the jar.

And then you have to actually add the path to the shared libraries as an environment variable for your Java process.

And then you load it in a static block, and then it works.
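A minimal sketch of that pattern — the library name and native method here are illustrative, not TensorFlow's real API:

```java
// Declare the native entry point in Java and load the shared library
// from a static initializer, so it loads once when the class loads.
public final class NativeBridge {
    static {
        // Resolves libmynative.so / mynative.dll from java.library.path,
        // e.g.  java -Djava.library.path=/opt/native/lib ...
        // (on Linux, LD_LIBRARY_PATH governs the dependent .so lookups).
        System.loadLibrary("mynative");
    }

    // Implemented inside the shared library via JNI.
    public static native long openSession(byte[] graphDef);
}
```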

We did that.

Where did we do that?

Worldwind.

Using the OpenGL layer inside WorldWind.

Max, globe.

I didn’t deal with the JS side.

I set it up in a couple of different places. That was back when, with CUDA at the time, it was the JOGL stuff that we had to load specifically. That's when we were doing Java, and doing the load library path on Linux is different than setting up your load path on Windows.

That’s when RTI actually, I know this.

Okay.

Wow.

Sorry, cool. Is Java at — what is it at this point — version 17? Java's on 21; I'm looking at 22 features.

Okay.

I’m on Java 8.

I'm on Java 8 too. It's like being stuck on an old version of Python.

Java has moved a lot since 8.

I know.

You'd be a good 30% faster.

There's a lot more intrinsics working behind the scenes, and primarily the JIT compilers keep getting better. Well, this will be more the stuff for the next couple of slides, but a lot of the integration tools have been improving along with Java. Deeplearning4j, unfortunately, still calls Python.

So as much as you'd like to avoid a managed language calling out to CPython, it does do that.

And sometimes that's not what you want.

But TensorFlow Java, as long as you stay on the Java 8 version, you’re fine.

I haven’t noticed any big memory issues there.

I can't get it to use CUDA, or OpenCL for that matter.

Deep Java Library I haven't had a chance to use; I just noticed that Amazon actually published it and is actively maintaining it. Neureka is a new one that I found out about this week. It was written by someone who really wanted nd-arrays in Java, so he wound up writing this library that's actually relatively easy to use, has full OpenCL bindings, and can do all the things we want inside it. Cool. And it follows the spec of being available on every single Java target.

Whereas TensorFlow Java is only built for some x86 targets.

And that's not really the ethos of Java, the whole write-once-run-anywhere thing.
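Since both DJL and Neureka are pitched at NumPy-style nd-arrays in Java, here's a rough sketch of what that looks like with DJL's NDArray API — a sketch, not from the talk, and it assumes you've pulled in DJL plus one of its engine dependencies:

```java
// NumPy-style array math in Java via DJL (ai.djl.ndarray).
// Requires the DJL api artifact plus an engine (e.g. the PyTorch engine).
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class NdArrayDemo {
    public static void main(String[] args) {
        // NDManager owns the native memory; try-with-resources frees it.
        try (NDManager manager = NDManager.newBaseManager()) {
            NDArray a = manager.arange(6f).reshape(new Shape(2, 3));
            NDArray b = manager.ones(new Shape(2, 3));
            NDArray c = a.add(b).mul(2); // elementwise, NumPy-style
            System.out.println(c);
        }
    }
}
```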

So the thing with Python's ecosystem is that it's kind of magical with everything it does, and that's why data scientists and AI researchers like it. They're not really software engineers; they want to point at their one specific thing, study that one domain, and not worry about all the tooling. And that's fair.

And I would say the Java libraries are not all centered around the same APIs yet.

There's no sense yet of them converging on a single set of APIs that are all really similar to each other.

And I really don't like having to rely on platform-dependent libraries.

C# does have a full NumPy library.

But everything's being set in motion for Java to have a NumPy equivalent too.

Java is getting a wonderful API for actually dealing with SIMD. It's incubating in Java right now.

You get SIMD optimization whichever primitive type you're working with.

Whereas today, a plain plus in a loop gets no guaranteed SIMD optimization.

So they are working on a platform-agnostic, gracefully degrading API.

That way, if you run this on x86 or ARM, it will work.

It will try the widest SIMD level first and then walk itself back.

But if you run it on RISC-V, where there is no SIMD support, it'll just gracefully fall back to scalar code.
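Here's a minimal sketch of that Vector API (jdk.incubator.vector) — incubating, so it needs the add-modules flag, and the species selection is what gives you the graceful fallback:

```java
// SIMD addition with the incubating Vector API.
// Compile/run with: --add-modules jdk.incubator.vector
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class SimdAdd {
    // SPECIES_PREFERRED picks the widest SIMD width the CPU offers;
    // the same code degrades to narrower (or scalar) lanes elsewhere.
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void add(float[] a, float[] b, float[] out) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(out, i);
        }
        for (; i < a.length; i++) { // scalar tail for the leftovers
            out[i] = a[i] + b[i];
        }
    }
}
```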

There's also a brand-new, safer, more Rust-style foreign function interface. It cuts the latency by a third versus going from JNI to C, and it handles all the memory pointers for you.

So you can actually track memory now from Java to C and back.

And this is a Java 22 feature.

So when we all get Java 22 at work, we can actually start using it.
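For flavor, a minimal sketch of that Java 22 Foreign Function & Memory API (java.lang.foreign), calling the C library's strlen — nothing TensorFlow-specific, just the shape of a downcall:

```java
// Calling C's strlen through the Java 22 FFM API, no JNI involved.
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
            linker.defaultLookup().find("strlen").orElseThrow(),
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        // A confined arena frees the native memory when the block exits.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom("hello");
            System.out.println((long) strlen.invokeExact(cString)); // 5
        }
    }
}
```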

I'm on Java 8.

8 is, what, nine years old? And 22 is about to come out.

I know.

And the other big thing in platform-agnostic computing is WebGPU.

So WebGPU is a universal graphics API that sits on top of Vulkan, Metal, and DirectX, with a nice Rust-like shader language.

And this is an example of it actually running on a laptop.

It's really a lot nicer to work with than GLSL.

Java doesn't have an official WebGPU binding, but there are frameworks for getting at it through C++, and Google is investing heavily in WebGPU — for instance, they are rewriting all of Chrome's 2D tooling in WebGPU.

They are rewriting TensorFlow.js to WebGPU.

They are even creating an ONNX runtime that runs in WebAssembly with WebGPU. So it is meant as a completely platform-agnostic graphics API.

So it does roughly what WebGL does, but with a lot more performance and a lot more control.

And a lot more support for compute.

The downside of WebGPU is you still can't use the tensor cores on NVIDIA GPUs, which I know is big — the space is getting there — but it does free your applications from running exclusively on CUDA systems.

The main implementation of WebGPU, wgpu, is written in Rust.

Rust is a type-safe language and is now even automotive and aerospace certified with a specific compiler, following the relevant ISO standards. So you have all the safety tools in there.

And we can actually apply that to running WebGPU.

And there are some actual use cases for AI with WebGPU, such as Burn.

Burn is a more minimal AI library.

But the idea is that it's built to compile anywhere.

To compile for every single target possible in Rust.

If it supports WebGPU, it runs.

Candle I haven't had a chance to look at, but it's by Hugging Face.

And I trust Hugging Face to do a good job. Now, Linfa is actually a lot more useful, at least for me, since I'm on the classification side; it's basically Rust scikit-learn. So if you need more basic AI or data science tools, you have Linfa.

It’s really fast out of the box.

The bigger problem is that it's in Rust, which not a lot of data scientists are going to want to work with. Is that where you'd find, like, a linear regression or decision tree framework? Yes — and I do think it's important for transformers now too.

Okay. So, looking at the future of more platform-agnostic computing, I do think there really needs to be a lot of work on the deployment techniques we use.

So TensorFlow, even though I can build it from source, still relies on Bazel, which not every group has approved.

I've also found Bazel a complete pain to learn.

Yes.

I mentioned we're all on Bazel.

Yes.

But Rust really helps open up edge computing. You might have specific APIs that you'd like to write quickly for new hardware.

And you'll have Rust for actually running it on your more automotive or mobile systems.

And I do think that at least on the Java side, whether you're running large servers or a mobile environment, everything works. The nice thing with Java in general is you can compile all your tools into one single jar; all you have to do is ship it and run it — see the sketch below.

So you can actually run your full web server, database, now machine learning tools, your application server, all of it, instead of having to distribute everything out. And I do think this really plays into the more per-node computing I've been trying to read up on, which IBM has been looking at mainly for doing more transactional verification:

they'll want to run essentially a local AI server on every single mainframe and use that for actually verifying transactions.
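A minimal sketch of that single-jar idea, using only the JDK's built-in HTTP server — scoreInputs() is a hypothetical stand-in for whatever model runtime you embed:

```java
// One process, one jar: an HTTP endpoint and the model side by side.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class SingleJarServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/predict", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            byte[] reply = scoreInputs(body); // hypothetical embedded model call
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(reply);
            }
        });
        server.start(); // ship the shaded jar, then: java -jar app.jar
    }

    private static byte[] scoreInputs(byte[] raw) {
        return raw; // placeholder: echo the request back
    }
}
```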

Is that it?

Is there anything I might have missed?

Probably just the future. That's it. The future is the future. So I do think that WebGPU really is the future. They are working on more of the actual tensor-specific APIs, trying to abstract that out.

But NVIDIA can be very hard to work with on that front, because having CUDA be a monopoly works for them. But if you have AMD GPUs, it works fine.

You might not have any CUDA or tensor-core calculations, but it also means that you can build your model on your integrated graphics, design it, and then ship it to test, which is a big thing.

Have you seen, I know you’ve worked a lot with other front-line data science frameworks like Spark and some others.

Have you seen any migration of things that were developed under Spark actually get pulled over into Java proper as part of the actual runtime?

Spark is used, but I think the tooling is starting to appear to just run this natively in Java.

Because when Spark was written, Java didn't even have lambda functions.

So Spark was written back in the Java 6 days.

Right.

So you had Scala doing a lot of heavy lifting for Java.

But now Scala is kind of declining.

Also, Hadoop is fading out.

But the big thing with Spark now is you're starting to get more Rust-based platforms to do the whole server-side integration.

You just communicate with an API call to the Rust system, and that will do all your computing for you and then hand your response back.

Okay. So the big one there is DataFusion.

Okay. And that is looking to be a complete Spark replacement.

Right now we're using PySpark on the front end.

Basically you can write everything in Python.

It converts it into a jar, pushes it over to where your Spark workers are running, does your heavy calculation, hashes it back to stitch it together, and shows it back.

It's been interesting. There was a later version of Spark that actually has the Pandas-compatible DataFrames.

I got it.

And it’s DataFrames in Python.

I had DataFrames in Spark, which are written in Java.

And I always would name them myPandasDataFrame and myJavaDataFrame, whatever, just to keep the naming convention straight, because I was also working with Polars.

Polars?

Polars. It's a Rust-based Pandas replacement.

Okay. What I was going to add is that Spark, in the later versions, is actually API compatible.

This is where I have to push the magic button that doesn’t look like a button.

Same one.

Again, thanks to us now for your inspiration. It’s still a jar of code.

With the phone lights.

So the API for your DataFrame is now compatible with the normal Pandas API.

So you don't have to care whether it's a Pandas DataFrame versus a Spark DataFrame.

It’ll be really, really interesting from a Rust standpoint if they wind up… They use Arrow.

Arrow.

Okay.

As long as your data is in Arrow format, which is supported under pretty much every single language now, right?

We’re good.

Okay. That’s it.

NVIDIA just came out with Pandas acceleration. Really? Yeah, something they partnered on to further improve it. That's an interesting one. Now, Pandas does have its downsides: even with caches getting larger, the DataFrames are still massive.

There's no straightforward way to feed them through the cache.

So you're still cache-thrashing, and you are leaving a lot of performance on the table with Pandas.

But Polars does try to solve that.

That's pretty cool. I think the Rust thing is probably more... I feel like it'd be more likely to catch on on the Java side, even though it's somewhat removed from machine learning and data science.

I was thinking of this more as the deployment side.

Yeah. Okay.

So you're like... I shared on the Discord server that OpenAI actually has full Spring integration now.

But you might want to just run your model locally inside of your Spring application.

But ultimately, every model, as far as an application server is concerned, is just a message queue.

So as long as you can make it look like a message queue, you’re fine.
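A minimal sketch of that framing — the model function here is a hypothetical stand-in for whatever inference runtime you embed behind the queue:

```java
// A model fronted by a queue: callers enqueue requests, one worker drains them.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;

public class QueueFrontedModel {
    record Request(float[] features, CompletableFuture<String> reply) {}

    private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();

    public QueueFrontedModel(Function<float[], String> model) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Request req = queue.take(); // block until a message arrives
                    req.reply().complete(model.apply(req.features()));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down cleanly
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public CompletableFuture<String> submit(float[] features) {
        var reply = new CompletableFuture<String>();
        queue.add(new Request(features, reply));
        return reply;
    }
}
```

Swap the in-process queue for JMS, Kafka, or DDS and the application server never has to know there's a model on the other end.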

You've got to do this just to get all your deployment set up. I think you hit on it from the data science standpoint: you're either working in Python, or maybe you're coming from a hardware background, maybe from a MATLAB background. I'd argue that as long as the Rust part is set up easily, with some kind of easy front end, that might be more powerful. I mean, it seems like the two languages are just so... it's almost like the two domains are so different.

So yeah, I think on the deployment side, Java and Rust are going to be a much better route.

But I do think there’s a lot of tooling that’s also focused more on not just data scientists, but like, oh, we’ve been able to make this model.

How do we get this working? Right.

Into the actual production system.

Well, I’m going to want to make sure I have all the Java or C sharp tooling there that can run this and have my application control how it runs.

So I can see something like ONNX being the transition — like you said with Arrow: as long as you can get it into Arrow, you're good. It's the same from a model standpoint: as long as I can get it into an ONNX package, then I can basically deploy it anywhere you've got this kind of setup.

And I have a whole big WebAssembly thing.

And then you want to actually be able to run ONNX in WebAssembly.

Well, Google's working on that. I'm guessing they're doing that mostly for mobile, but I don't know that for sure. I think part of it's web, but part of it's also WebAssembly. Yep.

There's a lot of push for WebAssembly. Get it in there. Yeah. It's pretty interesting.

Right. Well, I had a question for you. Any other languages that we haven't covered?

I wonder — about 10 years ago or so, there was this big discussion about R versus Python, right?

R lost. That's what I'm gathering, right?

From what you're not mentioning today.

You still see some R used — from my statistical background — especially if you're looking at things like quality control, trying to compute, you know, where the tail is.

I mean, it’s.

I like R because I can actually see what the hell is going on. Right. More so than Python.

Because you've had some of that experience. Right. Yeah. I've used, what was it?

RStudio.

RStudio. Yeah. For a while.

Yeah. For the specific thing that I needed it for, it did the job well. But you still had to know R intimately to get it done. I think that was the catch. It wasn't very memorable. Being able to share and run little methods of code to do small things — it wasn't necessarily that kind of thing.

And what people learn. What’s that?

I mean, kind of like back in the day in college, Microsoft would give you a Visual Studio license. Yeah.

They wanted you to learn Visual Studio, so when you got to a paid job, you'd be at a company that could afford a license.

Because you sure weren't affording a license yourself, much less that stack of MSDN disks. Yeah. What always bugged me a little about R: the basic data type, the data frame, works and is very readable, but it's slow.

To actually run really fast, you have to rewrite it like C. And if you ever go back and forth between Visual Basic and C, you think completely differently, even though you're doing it in the same interface with the same stuff. It's like, okay, do the thing. Yeah. I mean, we hit a lot of C++ a while back. That was fairly straightforward.

But again, that was just a one-off, written directly in the C++. It's not like that's a framework you can reuse.

Oh, now I want to do training. So, you know, it's kind of an involved thing. Does it require root to install or set up?

No.

Okay.

No, you just compile it. It's weird. You just run it.

Where do you run it? And is there a trusted environment?

It's not hard to go evaluate it all. It's not like... but yeah, I mean, you could pull it and run it.

I don't think it would be... the biggest thing I have depends on who your person is that approves things — if they go look at who wrote it and where it came from.

It’s like the way I don’t know.

That might come into play. Okay. So, if that's the case, it's small enough that you could actually pull it in and have a set of folks that are trusted go review it.

And it is provided as source code.

So it's not like it's just a binary.

Yeah. It's fast. So right now, running the small Whisper model using the Python pieces, I need a machine with 16 gigs of RAM to run it.

Running whisper.

The C++ rewrite — I need about eight megabytes, something like that.

Yeah.

Megabytes. Megabytes. Yes.

I mean, I've run it through several of the... the transcription stuff that I've got — I'm running some of the same things through it, like all these videos and stuff; I transcribe those as well. So if you actually go look, here's the transcript of it.

Yeah.

Let me show you.

To be honest, it thinks, of course, we're all the same person. Right. So you need like eight megabytes and it's running on a CPU.

Yes.

This is from last week. Here's the actual transcript, everything we talked about. It did have some problems with "mixture of experts" — used in this context, it's different from what it saw in training on natural language, so there's a couple of places where it got "mixture of experts" wrong. Of course, it is Josh with a southern accent like me, so it's kind of fun. But no, whisper.cpp was quick. I also looked at something built on top of whisper.cpp, and that was kind of interesting. All it is is a pass-through — it calls the same C++ functions that are available.

Somebody probably took 10 minutes to do that.

It's been pretty cool. I would have expected a lot of .NET-isms from Microsoft, you know — I'd expect to see a C# library that does the same thing.

They have ML.NET.

Yeah. That's the big one. But they only really have the one. Didn't they have another one that got shuttered?

Like a library in Python.

I can't think of it. I mean, they've fielded so many things from a library standpoint. I think we tried it, it didn't work, and we left that one over here. You know, I'm trying to think of what other languages...

I mean, of course, on the JavaScript side you get TensorFlow.js.

You got a couple of other options there.

If you want to run it straight.

No, I mean, going through the Iris example. So let me go back to, I think, your slides from earlier.

Yes. That's Tribuo. It does have full ONNX integration.

So you can run ONNX models.
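For reference, here's a minimal sketch of scoring an ONNX model from Java with ONNX Runtime's official binding (ai.onnxruntime) — the model path and the "input" tensor name are assumptions that have to match your exported model:

```java
// Load an exported ONNX model and score one row of features.
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;
import java.util.Map;

public class OnnxScore {
    public static void main(String[] args) throws Exception {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession session = env.createSession("model.onnx",
                new OrtSession.SessionOptions())) {
            float[][] features = {{5.1f, 3.5f, 1.4f, 0.2f}}; // one Iris row
            try (OnnxTensor input = OnnxTensor.createTensor(env, features);
                 OrtSession.Result result = session.run(Map.of("input", input))) {
                System.out.println(result.get(0).getValue());
            }
        }
    }
}
```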

Okay.

It’s funny.

Yes. There are builders for building out the trainer, but just getting a basic trainer.

That's all you need. Okay. The other thing I was thinking about: one of the hardest parts of rolling out that transcription service was actually getting it hooked into Flask and getting it served out and all that kind of stuff. Whereas in Java, I've got all kinds of different libraries that are hammered on all the time for performance and all that kind of stuff, serving up applications — what they call an application server.

You know, you'd think about doing the same with the models or whatnot.

So if I had that approach here, it'd be really easy to deploy it.

Is it a model?

It can be.

Deploy it, all that kind of stuff. We could turn it into a Spring project and get it deployed. Right now I've got it in a Docker container using Amazon Fargate that just spins up the container when I call it. The spin-up time is in minutes. Which is the hard part.

The reason I was looking at whisper.cpp is I might actually be able to run that, and it'll start up directly.

Yeah. The reason I needed the container was to get the RAM. You couldn't get that much on Lambda — you only get like 10 gig. It was a weird number, like 10 or 12 gig. Like they picked that number... it's not a multiple of four, I guess.

It’s not posted.

So we got that.

Set up a point.

I guess I’ll post it on your site. Okay.

But yeah, that's really cool stuff. How hard was it to learn this, you know, Tribuo? Tribuo was the easiest. Okay.

I learned about it from a Reddit post.

And I was like, oh, this is really easy. And then I checked for all the external calls. From there...

Validate.

Look at all the shared library imports.

Really, they're all in ONNX and TensorFlow.

And I burn the disk.

Okay.

That’s pretty nice.

Some of the systems I'm interacting with — one particular side of the house — do not allow any type of scripting language at all.

There's not even a Python interpreter installed.

That makes sense. Nor a compiler. You know, you're limited in even what shell you use and what's available from the shell. What shell do they want you to use? I don't know — I don't think it's bash.

I think it's C shell. C shell.

Yeah. I log in and, you know, most of the normal command keys are fine. Then I try to set something or whatever and I'm like, okay, well, I don't even remember how to do this now.

Could you even move a model onto the Java side?

Or is that too long?

I don’t even do it on the C++ side. I don’t know about the Java side.

I could probably do it in Java, as long as it was without any loading of jars or dynamically generating Java code

and then compiling it — and it would probably have to come fully shaded.

And still, I don't know that they'd let me have the compiler part of that.

We ran into a similar issue.

We had a library that was using the GPU for some kind of calculation — this was old-school stuff. They were using web shaders. And they weren't provided to us in compiled form. This thing was actually taking web shaders — web shading language and all — compiling them on the fly, and trying to run them. And it would just fail.

Nope. Hey guys, you all have never worked in this lab before, have you? No. Okay.

That was why we went with looking at Spark in the first place.

It does its job?

I can get it in there.

It’s compiled. We can run it, deploy it, do all the things we need to do. And it’s fast enough.

You know, it’s not that it’s hyper speed faster than anything else, but it does what we need it to do.

It's not incredibly fast, but it's not slow.

I've been curious about trying to get it to run inside of a hard real-time cycle — one hard real-time cycle. Okay. So if you can do it in your 20 milliseconds, with the network calls accounted for, that would be an interesting problem to explore.

Right.

You just mentioned C sharp.

Yeah, I know a little bit.

I’m so safe. I’m going to look at it. I can buy this. They’re only the same. Oh, we bought it. Yes. Oh, I mean, for my actual.

Really should buy that to me. I just said it. Okay. I’m the sponsor. Not sponsored. Okay.

I want to.

That’s it.

I want to... I want to tell you — I mean, I'm not a big fan of the C++ code, but that's a thing in the system.

Yeah. They're trying to push the automated Model Builder.

Which seems to be an ML.NET graphical tool that runs in VS Code and not just Visual Studio. Visual Studio. Regular.

This tracks back to way back in the day when C# first came out; we were talking to them.

I don’t know if it was ECMA.

I can’t remember the name of the actual binary code they used.

And the interesting thing there, we’ve been working with Java for a while.

And we basically switched over to some C sharp.

The .NET stuff.

And they're like, what? The thing is, .NET is actually an open standard.

They actually released the bytecode structure, signatures and everything, through — I can't remember which — ISO or some standards body.

Whereas with Java, initially you couldn't write things that went to bytecode without going through some Java authority.

They wouldn't put out the definition of what the bytecode means.

They had that.

Yes. Well, that was kind of interesting. We're talking Microsoft, and I'm thinking, okay, this is the one I don't trust with all this. And actually: we open-sourced it two years ago; here's the spec.

That was really, really interesting.

Initially, before C sharp,

there was a Managed C++ library for this.

Think C++ with garbage collection.

And where you think you can do things that are actually a layer lower... that was weird.

But.

Probably the good thing about this: whatever you're doing is most likely available in Azure, or wherever you might be working — something in Office 365 or something like that.

So, I mean, they mentioned Power BI and some of the other stuff, as they start getting AI-type things into more and more applications. The only thing is, the last blog post I see from them is from 2022.

So I wonder if they're really investing in it. They may have moved it from here back up into some kind of service — because when you have things like OpenAI splashed out everywhere, now everything's an API call through their service. I've seen others that used to provide libraries to be downloaded, and now the next version is, well, it's the same API, now we host it. And you pay for it.

And you're paying for it. That might be what's delaying it, until the OpenAI stuff turns around. But they do cut their prices.

I mean, they seriously cut some prices just last week — their OpenAI DevDay or whatever it was. They announced a lot of things there, with price cuts and whatnot. Yeah. That's competitive. We'll see how it works.

With OpenAI — when they threw out ChatGPT, I don't think they saw their woes coming.

Everybody was going, this is awesome. And then all of a sudden you're hosting how many users on something that's that expensive to run. All of us here can go, oh, it's like when my first cloud bill comes in.

I'm surprised they cut prices in places.

Yeah. I see it all as competitive.

I mean, I don’t know.

I mean, did it already have that much competition?

I mean, you can think of, like — I know they're not the same quality — the open source models right now.

Am I going to buy that, then, right?

Yeah. Because it's expensive to be paying for this subscription if you're an enterprise. Right.

Well, I think a lot of the open source stuff has gotten to where it's good now. Yeah, exactly. It's not as good, but it works for my scenario. I didn't show up in a Porsche.

Yeah, exactly — I showed up in a Chevrolet. The problem with open source stuff is you can't technically use it for products in a lot of cases. True. Yeah. Because you're reading the terms of service — you read it as a big horror story before you do that. Except for Falcon, I guess, and even that, because of the license they attached — I don't know if anybody can sue over that yet. Usually they'll make statements like, yeah, the data is fine and it's a fair license model, everything's great, until somebody gets sued, and then they figure out whether it's really okay. There's a story along those lines — it's Microsoft: if you use their Copilots, they said they'll back you with their legal team. For their Copilots, right? They named every product they had Copilot. Yeah, you can start off with that.

Yeah.

Yeah, but they had to commit that they would pay something. Okay. Yeah. So it's insurance.

Well, the thing is, they'll pay your legal costs — that's if you win. You still have to pay for your loss. I was gonna say, that lawsuit you've got to go through is gonna be very interesting. Oh yeah. That's gonna be a real one. We need Andrew to come back in and talk to us about that. That might be early next year, probably, if we can hold on.

We've got a guy in the group who's actually a lawyer.

He does patent law and things like that.

Initially he was an engineer — I don't know what kind — and then he switched to being a lawyer.

And Andrew, if you listen to this later, we're talking about you.

But yeah, he could be somebody to walk through the different licenses that are out there, because some of those are really weird.

Especially ones that say you shall always update to the latest version.

And as long as it’s reasonable.

And then the next version is a paid version.

Stay with me: but you have to update, because your contract, your licensing, says you have to update — it's "reasonable." So, from a pricing perspective, the part that I can't quite understand is, Microsoft is giving a lot away for free. Like the other day I was just kind of messing with the browser, and you get some of the AI features for free and whatnot. I mean, like, for zero. Just whatever data they can take from the user.

Right.

They might give you this for free, but probably all their expected revenue is around integration. Like, they wrote or helped write the OpenAI Spring integration module. Well, and there's Azure OpenAI.

Yeah, what I’m saying is as an end user.

Well, I mean, I was using it for kind of the developer scenario, but this is something that every run-of-the-mill user can get from the browser. Generating images, right? All kinds of stuff.

These people are not paying anything. I mean, unless they're being advertised to every once in a while, they just get it.

I bet they're spending a ton of money just to take market share from Google right now, and it's on the negative side. I don't think they're making it back. You might be seeing people going negative just to stay in the competition long enough.

Yeah, assuming that the cost will come down in two years, three years, to the point where...

Or compute will come down, like we've seen recently.

You’re probably going to have different quantization patterns that make these models again good enough.

A couple of weeks ago we did a thing where I was running a LLaVA 13B model on this laptop with 32 gigs of RAM and zero graphics.

I mean, zero GPU.

Something that normally... a 13B needs what, 26 gigs of RAM on a GPU, around that?

No, I think 13B or more.

That’s probably a little more. It’s small.

I'm hoping that the platform-agnostic gains will help also reduce cost.

Do you want to customize this thing for me?

Yeah, you can run on AMD. You can run on Intel's GPUs. Maybe remove some of that lock-in. If you check out ShadeUp.dev, that will just show you how easy it is to start using WebGPU.

ShadeUp.dev.

And click the creator.

This guy?

Yes.

The idea is that you just tie it in with a browser — or even a headless Firefox — and you can click over on the creator and drag it around now.

That’s snappy.

Considering how high-poly that is, yes.

It’s snappy around the… not on the string.

Yeah. It’s snappy.

I was looking at the other one.

I think I have a left-leaf left on. Oh, yeah.

Well, I can’t look at that while I’m doing it.

That’s like a half a second behind the A-Wave.

So… I think we’re close to time.