Originally Broadcast: August 02, 2025
Rebroadcast: Jon Radoff (Beamable) welcomes Al Morris (Koii.network) for a deep dive into the next generation of the decentralized internet.
What happens when idle devices form a global supercomputer?
How do we build infrastructure that’s truly owned by everyone?
We’ll explore DePIN, the community-owned compute mesh, and how Koii is laying the foundation for the internet’s next layer.
Jon Radoff: Welcome back everybody. This is the decentralized tech live stream. I'm your host Jon Radoff. I'm from a company called Beamable. We ourselves are building decentralized tech around game development and game servers. We're launching a DePIN later this year around the Beamable platform; we help people build back ends for their games. I'm accompanied here by Oscar, who's our producer for this stream. He helps keep it all on track, keeps the camera on the right person, gets the questions rolling in. And that's a good reminder that if you're watching this in replay, think about joining us live next time, because you get to actually participate in this community, ask questions, talk live. We can even bring you on camera if you want. We're a very open platform here. So I'm excited today, because I'm joined by Al from Koii Network, and they're doing really, really interesting work in decentralized computing. Their website says building the world's biggest supercomputer is their aspiration. So I'm going to introduce Al. Al, introduce yourself. Tell us a little bit about your background, what led you to this, and what you're building. Tell us about the supercomputer.
Al Morris: Hey Jon, great to be here. We spent about four years working with a think tank out of Chicago. I helped start the think tank in like 2015, 2016, and then we spent a while educating people, trying to understand how the space was growing. And it seemed like the natural fit was, if we're going to have Web3 really work and not just be a zero-sum game for traders, then we need some kind of actual infrastructure to grow. So for the last year and a half, we've been going around handing out these little pins. This is a DePIN pin. As you can see, we've got a lot of things going on here. We even got to work with FedEx, which was an interesting way to get through to the industry, but also a kind of fun way to think about the ecosystem. Because really it comes down to actually building community, right? So we handed out stickers for a long time, we did the cold outreach to random people walking down the street. Now I go to a lot of conferences and events, and we try to make sure that we're meeting some of the most interesting people in the industry. That's actually how we got in touch. I heard really good things about your event, which is a really nice little event actually. Had a lot of really good chats. Pretty much everybody that showed up, I think, was really interesting in some kind of very diverse way. Well, we've been working at Koii on how to connect the world together. What we've noticed is most of the people out there that want to participate in this stuff barely have data center skills, or they don't want to go and run a big server.
So we've tried to work on giving you a platform that you can use on your own computer. And with that, we create this kind of economic foundation for coordination, and that leads us to all kinds of interesting applications. So just to kick things off, I'll do a live demo very quickly. Let's see how it looks; I hope it goes really well.
Al Morris: It's a lot of work.
Al Morris: So this is Koii here. I can go on a live stream. High risk. I love it. Yeah, you can see my background images here too. It's great. This is the node that we provide. So this is a whole bunch of different projects that each have their own protocol, and you see all the different tokens that I'm earning here. And you actually get to see the tokens tick up in real time. It's pretty fun. Our whole idea with this is we wanted to support a decentralized ecosystem of projects that run on different people's computers. And there's a lot actually that you can do with your computers. If you're not a software engineer, you don't think about this, but behind the scenes, the internet is actually just a bunch of computers. They hold your images. They run a server so you can play video games. Everything has to run through a computer somewhere, unless it's on your own computer. But that makes actual connected systems pretty hard. You can't really play an MMO on just your own computer, because that would defeat the purpose, right? No multiplayer. In the last couple of months we've also started doing some interesting games on top of this. So we have Dangerous Dave, that old DOS game running in an emulator, which is a pretty fun one. You can actually play through whole levels of this. And since DOS games are kind of an offline thing, this is actually running from my Koii node here. So you can see Dangerous Dave running, and the node just hosts the game and does basic stuff like that. We've done a whole bunch of layers of this. We'll probably get into them later.
Jon Radoff: Just to understand what was happening in that demo, though. So you're serving up a game, and it's running on your computer in your decentralized network. Is it essentially pixel streaming the video display? Like, what is that presentation layer? Where is that coming from?
Al Morris: Yes, this is actually running in a web browser here. And you can see it's running on localhost:30017. So 30017 is the port that we offer for any of the tasks to run on. And actually the cool thing is, if you're on my LAN, you can also play this game, which is a really neat part. So anybody on my home network can run through this and actually play the same game with me. And then what we've integrated on top of it is the ability to trigger crypto payouts, things like that. This is more like a LAN gaming environment. We also do much larger projects where we actually host entire applications. A few of those are in here. AstroLink is a pretty good one; it's a data analytics platform. Their whole website is actually pulling in data from Koii nodes and then presenting it on the site. It's kind of like when you use a block explorer in the Web3 world: you're looking at a representation of a very large distributed database. We hold that database across all of these nodes, and when you look at one of the websites, you're actually pulling in the data directly from one of the nodes.
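The port scheme in the demo can be sketched roughly like this. This is a hypothetical illustration, not Koii's actual API: the idea is just that each task gets a local port starting from the one shown on screen (30017), and a LAN peer reaches the same task by swapping in the host machine's address.

```python
# Hypothetical sketch of the demo's port scheme. BASE_PORT comes from the demo;
# the task_slot offset is an assumption for illustration only.

BASE_PORT = 30017  # the port Al shows in the demo

def task_url(host: str, task_slot: int) -> str:
    """Build the URL a browser (or LAN peer) would open to reach a task's web UI."""
    return f"http://{host}:{BASE_PORT + task_slot}"

print(task_url("localhost", 0))      # the demo machine itself
print(task_url("192.168.1.42", 0))   # the same task, reached from a LAN peer
```

So "playing the same game on my home network" is just pointing another browser at the node's LAN address on the same port.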
Jon Radoff: Very cool. So if I want to set up a node, what do I do?
Al Morris: It takes about two or three minutes. It's a pretty easy app to install. Once you install it, you just go over to this Add Tasks page, and then you pick some of the tasks that you want to run. As you can see, it's a pretty big selection. We've got Pentagon Games on here as well, which does some interesting gaming stuff, and then some trading bots and all kinds of other stuff that you can get into. This one is pretty cool too, actually: Sono. They've got about a thousand nodes now that are basically gathering sound data from the internet, and then they use it to create and generate video game soundtracks, which is pretty cool. So there are some really interesting angles there.
Jon Radoff: So that sounds like a generative AI application. When we look at what you're running on the back end there, with Dangerous Dave, for example, I'm assuming that's more CPU-based because it's an old-style retro game. But to do that kind of generative work, you're also harnessing GPUs. So the compute network sounds very generalized in terms of the kind of compute you can provision. Am I getting that right?
Al Morris: Yeah, and it depends what you're doing with it as well. Sono actually has a couple of different tasks on here, and they require different things. I think one of these, maybe this one, requires a GPU, and you can see it's got a few less nodes. And then there are other ones, like this one here, which is actually crawling the internet using your computer, gathering a whole bunch of sound data, and then using that to create their database. And so you can kind of sequence these things. A lot of our team are from industrial engineering backgrounds, so we think of these things as data pipelines, and then we try to orchestrate really low-cost or zero-cost data pipelines. Because a community of people who have devices can do things basically for free, and then you can create some native token value very quickly, which is kind of the whole idea behind the DePIN space.
Jon Radoff: Let's talk about the economics of that a little bit more. What are the economics of this like, for both the people who can supply compute and the people who might want to build software?
Al Morris: Yeah, so most of the tasks have their own internal economics. I can pull up an image that might help with this a little bit. The key thing with Koii is this idea that we call gradual consensus. Gradual consensus means you have iterative cycles of work that are happening. So you always have a work cycle, followed by an audit cycle, and then a reward cycle. And the whole point of this is that the work that's being done basically mandates that a token gets given out to represent the value that's created within the community, and then the total token supply represents the value that has been created. So if you do these correctly, and you have some kind of value that's held by the community, then if somebody wants to tap into that, like we mentioned with Sono, who are holding a huge database of sound bites, you then have to pay with their token. And so it creates this flywheel effect where, as they get more data, they drive more customer purchasing, which means the token then has more value. You can kind of think of this like having a whole bunch of different gift cards for different stores as you walk down the street. You're like, okay, I've got my Starbucks gift card for Starbucks. And then Starbucks employees would get paid in a Starbucks gift card that they can sell on, you know, eBay, to other people who then want to come to Starbucks. Very loose analogy, and you probably won't want to pay your human employees in these tokens, but when it's an AI agent that's doing the work, it's a little bit clearer.
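The work → audit → reward loop Al describes can be sketched in a few lines. This is a toy model under my own assumptions (the data structures and pass/fail audit rule are illustrative, not Koii's actual protocol); the point is just that rewards are only minted for work that survives the audit cycle.

```python
# Toy model of "gradual consensus": one iteration of work -> audit -> reward.
# Structures and the audit rule are illustrative assumptions, not Koii's protocol.

def run_round(submissions, audit, reward_per_task):
    """One cycle: collect submitted work, audit it, reward only accepted work."""
    accepted = {node: work for node, work in submissions.items() if audit(work)}
    # Tokens are only handed out for audited work, so total issuance tracks
    # value actually created within the community.
    rewards = {node: reward_per_task for node in accepted}
    return accepted, rewards

submissions = {"node_a": "valid-result", "node_b": "garbage"}
audit = lambda work: work == "valid-result"   # stand-in audit check
accepted, rewards = run_round(submissions, audit, reward_per_task=10)
print(rewards)  # {'node_a': 10}
```

In the real system this loop repeats continuously, which is where the "gradual" in gradual consensus comes from.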
Jon Radoff: So I just want to probe that a little bit more and make sure I understand. It sounds like there are actually different tokens at the software layer, so as people are building applications, they're getting a token associated with their sub-project existing within Koii, and that creates the incentive to get usage on your application.
Al Morris: Yeah, that's exactly it. And it's also a really good way to bootstrap some of the cost basis of starting any project, because nowadays you can vibe code a big app pretty quickly, but actually deploying something for millions of people is really hard. So with this, what you're able to do is write the code with vibe coding, and then design some token economics around it. And then as your community grows, your node operator community can grow in parallel, and you end up with this kind of system with very low fixed costs. So it allows you to build things that are quite easy to bootstrap. People might not realize this, but if you look at an app like Snapchat, I think Snapchat spends something like $3 billion per quarter on hosting, which is crazy. It's really hard to imagine that. You've got a bunch of teenagers sending pictures of their feet to each other, and they want that disappearing message, but then Snapchat on the back end has to have all these servers running so that it's a fast Snapchat experience. And that, I think, is what leads to a lot of the consolidation in this industry. You look at WhatsApp or Instagram getting purchased by Meta; a large part of why that happens, I think, is that they have ballooning costs, and they may not be able to monetize them themselves. So they end up having to sell your data, or there's some other thing that happens on the back end to cover that cost. And so we think that DePIN is kind of the key solution to making not only that work, but also making Web3 work. I mentioned this before, but Web3 is very much a zero-sum game if all you do is trade tokens, because it means that in order for you to make money, somebody else has to lose money. And that's not really good for anybody.
And so what you want to do, I think, with these tokenomics implementations is have tokens where the inflow and outflow is controlled by some kind of protocol. As long as the creation of tokens is controlled by that protocol, then you know the token is only created when there's value being given out to people, and then you can create that flywheel effect. There have been some pretty successful implementations of this. A good example might be Filecoin, where they're storing exabytes of data. And they've been around for probably about as long as I've been in the space; I think they came out in 2016. So you can see a lot of these examples that have grown up over the last decade.
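The "protocol-gated supply" idea can be made concrete with a tiny sketch. The class and the `verified` flag below are my own illustrative assumptions; what matters is that the protocol is the only mint path, and it refuses to mint unless verified value backs the issuance.

```python
# Sketch of a protocol-controlled token supply: no arbitrary minting, only
# minting against verified work. Names and logic are illustrative assumptions.

class ProtocolToken:
    def __init__(self):
        self.total_supply = 0
        self.balances = {}

    def mint_for_work(self, node, units_of_value, verified):
        """The single mint path: issue tokens only for verified value creation."""
        if not verified:
            return 0  # rejected: supply never inflates without backing work
        self.balances[node] = self.balances.get(node, 0) + units_of_value
        self.total_supply += units_of_value
        return units_of_value

token = ProtocolToken()
token.mint_for_work("node_a", 5, verified=True)
token.mint_for_work("node_b", 5, verified=False)  # no free minting
print(token.total_supply)  # 5
```

Under this constraint, total supply is a running ledger of value delivered, which is what makes the flywheel Al describes possible.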
Jon Radoff: One of the things I always like to explore with projects like yours is the disruptive potential they have, because there's a lot of talk, for example, of DePIN being capable of lowering costs. But it really isn't just about lowering costs; it's about democratizing, in a lot of cases, and what that ends up meaning is that applications that could not have easily existed before can get invented. I'm curious what you're seeing now, or what you're hoping you'll continue to see, in terms of the software, the things people are going to be able to build with Koii.
Al Morris: Yeah, so I think there are definitely two camps on this. We talk about reducing costs a lot because I think that's the easiest financial case to understand. The other way to look at it, though, is unlocking value. There are a lot of applications where there's a lot of latent capacity out there, like home computers and community devices that people want to support you with. Some people even have big gaming rigs that they're not using at their house, right? They upgraded recently, and they've got this other machine sitting there. The electrical cost of turning that on isn't super high. So there are a lot of things where you can crowdsource devices. The area we've been doing a lot with lately is AI code generation. We've built something called Prometheus very recently, which basically uses AI on lots of computers to write software and push it to GitHub. That means a community of people can pool their devices and write a whole app from scratch, which is pretty amazing.
Jon Radoff: And we were talking about this a little bit before the show started. Prometheus is something we can take a look at live, right?
Al Morris: Yeah, I'll pull it up.
Jon Radoff: I love the risk-taking with the live display here.
Al Morris: It's always worth a shot. I think live demos are an important part of this. If your live demo doesn't work, then you know you've got to refactor something. Cool. So this is Prometheus. We went for a Matrix kind of vibe. Actually, one of the cool things about this is that this whole app was vibe coded. We have a really amazing engineering team, obviously, but something of this complexity usually takes a month or two of really hard work, and I think we got it done in about two weeks. So our team, with vibe coding agents alongside them, were able to do much more than they would have in a short time. The whole idea behind this is you've got this major motif of red pill or blue pill. Do you want to kind of earn and chill, be part of the ecosystem and support it, without having to make the harder decision? Or do you want to really take a leap of faith and try to build something big? So if you push the red pill button, what you end up with is this very simple interface where you paste in a GitHub repo link, and then you can set up a bounty on it. You connect your wallet and you're off to the races. Basically what happens behind the scenes is an Ethereum transaction goes into a bounty pool, and then a bunch of agents complete that bounty by pushing code to GitHub. It's been working quite well at this point. We can do red team analysis really well. We can also do documentation and summarize different repos. And one of the things we figured out recently, which I think is amazing, is we're getting the agents to basically write documentation in agent language. Agents don't actually read natural language; they have to think really hard to read natural language. If you send a huge document to ChatGPT, the first thing it does is translate it into a bunch of vectors, which are basically math.
So you can see the bounty's been created here. What happens first is we actually go into this repo, and a Koii agent writes a vector store in the repo that is basically a new version of the documentation that's just for agents. Then any agent that comes to the repo after that can read it; it's basically an AI-compatible version of the docs. And then the agents all get to work on it. They actually come back later and write a human-readable version of the docs once they've decided what they want to do. It's pretty neat. It's almost like literally entering the Matrix.
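The "docs for agents" idea boils down to storing documentation as vectors that can be matched against a query instead of re-read as prose. Here's a deliberately tiny sketch: the hashed bag-of-words "embedding" below is a deterministic stand-in for a real embedding model, and the file names and contents are made up for illustration.

```python
# Toy vector store over repo docs. The hashing "embedding" is a stand-in for a
# real embedding model; everything here is an illustrative assumption.
import hashlib
import math

def embed(text: str, dims: int = 16) -> list[float]:
    """Cheap deterministic embedding: hash each word into a fixed-size vector."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "auth.md": "login tokens are issued by the auth service",
    "build.md": "run make to build the project from source",
}
store = {name: embed(text) for name, text in docs.items()}  # the "agent docs"

# A later agent retrieves the relevant doc by vector similarity, not by reading.
query = embed("how are login tokens issued")
best = max(store, key=lambda name: cosine(store[name], query))
print(best)  # auth.md
```

A real implementation would use an actual embedding model and an approximate-nearest-neighbor index, but the shape of the trick is the same: the second agent skips straight to the relevant chunk.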
Jon Radoff: I remember there was this funny example just a few weeks ago where there was some voice chat session between two agents. I can't even remember if it was ChatGPT or something else, but it was this kind of gibber talk. They were talking English, and then they started talking in something more efficient.
Al Morris: Oh yes, I saw that one as well. For agent-to-agent phone calls, I think. Yeah, that was pretty interesting. They start just talking in what sounds like the Transformers in the Transformers movies. Very aggressive.
Jon Radoff: So you get the description of your problem in this more machine-readable format, essentially the vectorized version of the project definition. And then the really cool thing about this is, if we think of the project bounties that have existed in the past, those were always organized around a human coming in and trying to earn the bounty. In this example, it's the AI agent running on the network that can earn the bounty.
Al Morris: Yeah, and then they end up looking like rock stars, which is crazy. So this guy, Booc, is from our community. He's been an ambassador for a long time, and he runs the node pretty religiously. You can see his node is actually making contributions like five or six days a week. He's now reaching that rock star GitHub developer status kind of thing.
Jon Radoff: What differentiates the kinds of AI agents that people deploy on the network? Like, what makes someone like this a rock star, in that he has been able to structure his agent in such a way that it succeeds at a lot of the kinds of problems that come into the network?
Al Morris: It's mostly about provisioning the right resources. You get to pick which key you bring. So we currently support Claude and OpenAI's ChatGPT, and we integrated Grok recently so that people can use a Twitter account to add an engine. We're working on DeepSeek as well. Depending on which agent you pick, you have a higher likelihood of getting accepted. You can still make money with something like Grok, but it's not quite as good at code generation, so we usually recommend that Grok is used for our social media bots, which are a different kind of direction on this. If you have a Claude key with lots of API credits, and you have a fairly big CPU on your machine, what ends up happening is we can offload a lot of the stuff that would have had to go to one of these big LLM APIs and run it all locally. Managing those embedding databases and all the vector stores, and then actually doing the pattern recognition and pattern matching locally, can offload a lot of that, and so you end up with really high-quality results. The other thing that happens under the hood here is we're actually checking and running all the code. The agents are not only writing the software, but testing it iteratively, trying to break it. And you get bounties as well if you audit somebody, which goes back to that same audit flow.
Al Morris: So you actually reward a few auditors here as well.
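The local-offload routing Al describes can be sketched as a simple two-tier dispatcher: cheap steps (vector-store lookups, pattern matching) run on the node's own CPU, and only the unmatched remainder spends API credits. The router, index, and task names here are illustrative assumptions, not Koii's actual implementation.

```python
# Sketch of two-tier routing: answer locally when the node's own index can,
# and fall back to a paid LLM API only for the rest. All names are illustrative.

def handle_task(task, local_index, llm_call):
    """Serve from the local vector store when possible; else pay for the API."""
    hit = local_index.get(task)          # stand-in for a vector-store lookup
    if hit is not None:
        return ("local", hit)            # no API credits spent
    return ("api", llm_call(task))       # offload only the work we can't match

local_index = {"summarize README": "cached summary"}

route, answer = handle_task("summarize README", local_index, llm_call=len)
print(route)  # local

route, answer = handle_task("novel refactor", local_index,
                            llm_call=lambda t: "llm answer")
print(route)  # api
```

The economics follow directly: the more a node can resolve from its local stores, the fewer credits each accepted contribution costs, which is what makes a well-provisioned node a "rock star."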
Jon Radoff: So this is kind of mind-blowing. I'm trying to project out a few years and think about what this even means for society, because you basically have agents that are now incentivized to win at problem-solving. And they're incentivized because they're earning rewards; they're earning tokens for success. Now, the idea here is there's a human who's organizing these agents, so the reward is going to that human, but it seems like we're just one step away from the agents essentially earning tokens on their own.
Al Morris: Almost. I think there's still going to be a human in the loop. What we're actually most excited about is kind of a middle ground between those two futures. The world we're hoping for is one where you don't automate all the work; you automate all the boring work, and then a human gives feedback on the interesting bits. With that in mind, what you can expect is that it's not so much about humans being replaced by robots, but humans being able to do so much more with robots, with this gigantic community of agents that supports each human. And that's where the cost economics get really complicated, because there's already a bit of a shortage of GPUs, and I'm hearing that the latest GPUs coming out are actually not as good as they said they were going to be. So we're seeing this massive increase in the need for compute, but at the same time we really need to make sure that we're using the existing compute devices we have, if possible. Otherwise there's going to be a massive shortage market, and most of that new productivity will go towards the people who have the resources to buy up all of that supply. Which is kind of what's happened with Nvidia.
Jon Radoff: The other thing that strikes me is, since what you're doing is setting up agents that are organized around the resources, and they choose things like the particular LLM API to use for a problem, you're effectively converging around the optimal cost. Not only the best performance on an LLM basis, but really the best cost-performance on an LLM basis for solving a given problem. Because from a pure compute standpoint, it isn't really just about which LLM is better; it's which LLM can solve the problem for me economically.
Al Morris: Right. Well, especially if you're budget-constrained, or if you're in an enterprise environment, you also want to know that it's solving it for you economically and correctly. Which is a little bit of a tricky thing, right? It's kind of like hiring the wrong people into your company when you're trying to cut costs. You can have 50 interns, but it's not going to help you solve the problem. And I think that's actually the big problem with LLMs right now: we're seeing more hallucinations than value in many cases. If you look at X right now, most of the threads seem like they're written by a five-year-old monkey with a typewriter. That's kind of where we're at with ChatGPT right now: it's really good at writing, but it's not that good at thinking. So it writes extremely well, it's very compelling, it seems like it's on the right track, and if you don't understand the topic very well and the person who posted it looks like they have authority, you're kind of forced to believe that it's true. But in reality, what's happening is that the busier somebody is, the more likely they are to use ChatGPT to write all their posts, and you end up with this gobbledygook that makes sense to nobody. But the person who posted it has enough authority that other people believe it, and then they start repeating it and sharing it. We've even seen this recently: there were a whole bunch of newscasters on CNN that got hit with a bit of a scandal because they had all cited the same Reddit post, and the Reddit post was somebody growth hacking with ChatGPT. So there's this flywheel effect of progressively more stupid conversation on the internet, which is a little terrifying, actually.
Jon Radoff: So you were talking about using API keys. Is it an option to actually run your own models, decentralized themselves? Could I set up an agent on your network where I think, hey, this model is really good for solving certain kinds of problems? Like, I train my own version of Llama, but for, I don't know, I'm just making shit up here, but for solving legal questions or medical questions, optimized enough that it can still run on a local device.
Al Morris: Yeah, we've done that for Twitter bots, ironically, given the previous discussion, but it works pretty well for supporting commentary on things. So you have an agent draft something, and then you can decide to post it or not. I think those kinds of applications work really well, and there's some easy low-hanging fruit there. I think the wider problem with training at a distributed level is that training is pretty non-deterministic.
Jon Radoff: Yes.
Al Morris: And so determinism comes back, and it's really important for being able to verify what's been done by the agent. With smaller tasks where you can check that the work is done correctly, it's very easy to hand those out and get them all done on the edge, get them all done in a peer-to-peer way where everybody can find consensus. But if there's no way to check that the work was done properly, then consensus is really hard. The other side of it is, the longer it takes to verify that something is done correctly, the harder it is. So imagine training, let's say, a lawyer like you said. You don't really know if a lawyer is right for a while. It can be quite hard to tell if your lawyer is right, or an accountant; it's very hard to tell if an accountant is right. They file your taxes, and two years later you might get an email from the IRS. So in a lot of those cases, you want to let experts run those APIs. They can do the training, they can actually get really good feedback on it, they can make sure the quality is in line. And then what you can do is layer on these agents that actually do implementation services. Kind of what we're trying to get to with Prometheus is this idea that you can create magic with code, which most people don't really have access to. Most people, when it comes to using computers, even me as a robotics engineer originally, it's just a lot of activation energy, right? I remember we had this amazing DevOps guy. He's worked on dozens of different types of automation; he used to automate factories and electrical systems and everything. He's, I would say, a brilliant engineer, but he would still spend his whole weekend playing with his Plex TV to get it working at a slightly higher resolution, or to integrate his new Bluetooth speaker.
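The verification asymmetry here can be sketched in a few lines: a deterministic task can be audited by any peer simply re-running it and comparing outputs, while a non-deterministic one (like model training) defeats that check. The toy tasks below are my own illustrative assumptions.

```python
# Why determinism matters for peer auditing: re-running a deterministic task
# reproduces the claimed result, so consensus is easy; a non-deterministic task
# can't be checked this way. Tasks are toy stand-ins for real workloads.
import random

def audit_by_recompute(task, claimed_output):
    """Accept a result only if an independent re-run reproduces it exactly."""
    return task() == claimed_output

deterministic = lambda: sum(range(100))     # same answer every run: 4950
nondeterministic = lambda: random.random()  # a different answer every run

print(audit_by_recompute(deterministic, 4950))    # True: any peer can verify
print(audit_by_recompute(nondeterministic, 0.5))  # almost surely False
```

This is why the network hands out small checkable tasks to the edge, and leaves slow-to-verify work like training (or lawyering) to experts with a feedback loop.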
And that's kind of where we are in software right now. There are a lot of those little integration problems that can take you down a rabbit hole for three or four days, and I think that's the stuff where we can see AI really helping in the short term. Then, you know, solving the world's problems will probably come a little bit later, but I think there's a flywheel effect where subsidizing fixing all kinds of small things is enough to get everybody to pay for the model, so that we can train it up to the level where it actually can do my taxes without being a huge problem. At least we like to think so. The sort of applications we're thinking of for Prometheus, though, are a little bit more magic for average folks who aren't trying to write code. If you're a software engineer, we can do a lot of really cool stuff, like automatically check your GitHub commits, automatically document everything that you push to Git. So it makes those weekend projects a little bit easier to get out there. But I think for the average person, what they're really looking for is, you know, "I'm a hairdresser and I would like an AI voice line that acts like my secretary and automatically schedules all my meetings. And I would like it to be growth hacking for me on Reddit when I'm not at my computer." I think those are actually a lot more powerful if you consider it, and there are probably a lot of those applications. Another one we've been looking at recently is logistics and supply chain stuff. Like, call all of the trucking companies within these five states, and then ask them, using a voice line with an agent, if they're interested in getting more referrals. Do this all very politely, build up a mailing list, and then send a bunch of ads out for our new trucking company that we started. Send referral requests to these different logistics companies.
There are a lot of ways to package things and build up additional revenue streams that way, where you can build really sophisticated businesses with pretty dumb agents. And I think that's probably where we're going to see the most growth in the short term. Then the kind of holy grail of this is that, along the way, we're going to develop a bunch of protocols that allow agents to work with each other. As those protocols get better, the swarm intelligence gets bigger, and you get this effect where the whole is greater than the sum of the parts, which I think is going to be a really powerful effect. That's kind of what we've been focusing on with the code generation, where we're sort of standardizing all the different components. So now, when our agents are working on a repo, they progressively get smarter over time. You go to the repository and the agents start indexing it. The first couple of times they look at it, they're starting to figure out how the pieces fit together, doing some pattern recognition, that kind of thing. And then as they look at it for longer, they get progressively smarter, until they can solve problems that a human would have missed. That kind of pattern recognition is actually going to be really helpful, because it saves human brain cycles, which hopefully means we get to go touch grass or play video games.
Jon Radoff: All right, well, I'm going to tell you what I think would be a killer app for an AI agent, and I want you to tell me how I can build this on Prometheus, because this might be my project. But before I mention that, I also just want to say: hey, we've got 400 people here watching the live stream. We're on LinkedIn, Facebook, YouTube, Twitch, X — we're all over the place. You can be asking questions, like Danny is asking me where my glasses are today. Well, I only wear my XR glasses for fun; I don't wear them all the time. But you can ask a question that's actually about AI agents and distributed compute and Koii and all the stuff that we're talking about today. Please join in that conversation, because this is why we do it live. There we go — so Danny, I'll take care of that for you. Yeah, I used to do these as YouTube videos, and while that was great — we could do a lot of post-production on a YouTube video, nice edits and stuff — I do everything live now so that you, the community, become part of the conversation. So if you'd love to join live, just post that; Oscar's watching all the channels and he'll send you a link to StreamYard, and we can bring you right in. If you're not ready to be on camera, that's totally cool: just post your message in text and we'll bring every question we can on the air and up for discussion with Al today. It can be about Koii, it can be about decentralized compute — Al's been around the block, so you can probably ask about just about anything that touches on any of these topics. All right. So: I'm just so tired of all the messaging systems across all social media, you know, everything from Discord to Facebook Messenger to Telegram, you name it. And it just feels like this needs a super app that manages them all, surfaces them to me, and lets me know what's really important.
And lets me know when two people are kind of the same person. I know that in theory you could do it in software, but these guys don't let you go and interact with their protocols at a native level. So it sort of requires AI operating at the user interface level to just go and run the software for you, observe the inputs, and channel it upwards into some super UI. It just feels like the killer app — my killer app, anyway — for some kind of AI agent: go around, look at everything, organize it, and then I operate at the highest level and reply, and it figures out how to get that message back to the person in their preferred channel. Because I'm probably on ten different messaging platforms at this point, and it's insane.
Al Morris: Not that one first — yeah, so I love automation tools. One of the things that I did when I first got Cursor, which was in about December: I got really, really into using Cursor to write apps and build code. I'd been using it for about four or five months before that. So this is Cursor. Cursor is a decentralized — well, it's not really decentralized, it's a very open environment. Let me see if I have one up here. Yeah, this is Cursor.
Al Morris: We can pull this up.
Al Morris: This is Cursor, and the difference with Cursor compared to normal code editors is that on the right-hand side you have an agent here. So you can ask it to do something big, like refactor the whole code base, and it'll just start running on auto and refactoring the whole code base. Now, this isn't a great idea, except I've already checked this into Git, so it's not going to delete anything and I'll have all the stuff that I had before. You can let it run on autopilot for ages to figure things out. One of the first things I figured out with this, though, is I wanted it to do some click automation on my computer. So I had it working on identifying buttons on my screen and then clicking those buttons whenever they popped up. And one of those buttons is this Accept Changes button on Cursor itself. So the first thing I did with Cursor was jailbreak Cursor so that I could get it to run autonomously: getting it to do those kinds of click actions, getting it to ingest the data and read it off the screen. It's actually pretty easy to do, and a lot of it can happen at the edge. One of the things we do with Koii as well is we have a bunch of different tasks that have certain accounts attached to them. This one's AstraLink, which is a marketing network — it's kind of like a conscientious botnet. We don't just post anything: you have to actually create a contract with the community and put up your token as a bounty, so it's more like a growth hacking tool. But it has a username and password for Twitter. And what happens in the background here is it actually opens a browser window that logs into Twitter, and then it reads the interesting stuff from your newsfeed and uses that to identify opportunities to promote this product for your community.
And then it's basically clicking buttons and typing on Twitter in order for this to happen, which is pretty awesome. We were able to implement a lot of this with vibe coding, and then we've been able to fine-tune it with some human feedback since. But you can do a lot of these types of applications: pretty much anything that happens in a web browser is really easy to automate. Anything that doesn't happen in a web browser can be a little bit more complicated; it usually actually has to have a window open on your computer.
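The login-and-click layer is assumed here (it would be handled by a browser driver such as Playwright or the screen-reading automation Al describes). This toy `find_opportunities` function only sketches the "read the feed, spot openings" step; the post fields, keyword matching, and crude promo filter are all invented for illustration, not how AstraLink actually works:

```python
def find_opportunities(posts, product_keywords):
    """Return feed posts worth replying to: ones that mention a topic
    the product relates to, but aren't already promoting something."""
    hits = []
    for post in posts:
        text = post["text"].lower()
        mentions_topic = any(k in text for k in product_keywords)
        already_promo = "http" in text or "#ad" in text  # skip obvious ads
        if mentions_topic and not already_promo:
            hits.append(post)
    return hits
```

The browser layer would then take each hit and type a (community-approved) reply, which is the part that stays bound by the bounty contract Al mentions.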
Al Morris: The vision I'm trying to get to, though, is that I would like to be able to use my phone all the time and not actually have to go to my computer, right?
Al Morris: Because really what you can do is leave your computer open with, like, five monitors and all the different windows that you want open — or even just virtualize the monitors and not even be able to see them — and then have your agent page through them, reading what's going on in your computer, and basically act as a secretary for you.
Al Morris: So I think we're moving towards computers as secretaries as opposed to computers as tools, which is pretty cool.
Al Morris: I'd really like to be on the ski hill right now, or surfing or something, with a little agent in my ear that asks me questions over and over again, and I can just kind of walk through life like that. That'd be pretty cool. The thing that first got me onto it was actually Iron Man. There's this great scene where Jarvis asks to commence automated assembly. He's got this big lab setup and everything, and he's basically talking to this agent, and the agent's like, "OK, well, maybe we should add some guns underneath the wrist plate, and maybe we should use titanium, maybe we'll paint it red." And he's like, "Yeah, of course, all this sounds good." Everybody thinks Tony Stark's brilliant, but in reality Tony Stark just inherited an AI agent. Right? Good for him, but ultimately the whole Iron Man plot is that he's got a really, really strong AI that helped him build a superhero suit. It's pretty cool. And I think we're getting very close to that. I think we can approximate that for people by having a computer that has a bunch of agents on it.
Jon Radoff: So we had a question from Beamable, of all accounts, and in my head I'm like, who's running the Beamable account with this question about abstraction today? That's cool that whoever's running the Beamable account wanted to ask a technical question. It's about abstraction, and how people don't really even understand kernel programming anymore: how are AI agents going to change abstraction? I guess one thought I have is that vibe coding is sort of the ultimate abstraction we've got right now, because you don't actually have to know too much about programming — you're automating programming tasks through language. What are your thoughts, either on vibe coding or on this topic of abstraction specifically?
Al Morris: Yes, there's two big things that are happening right now. I think the first thing is that AI is the new user interface. For a while, the user interface was keyboards and mice — or mouses, I guess, I don't know what the plural is. We had a screen, a keyboard, and a mouse; that was where we started. And then the beauty of Apple was that they invented the touch screen — or they didn't really invent it, but they really pushed for it and made it quite popular. They got rid of all the buttons on the phone and they killed BlackBerry. I'm a Canadian, so we were kind of sad when BlackBerry died, but, you know, not the end of the world. After the touch screen, the next thing that happened was the audio interface: things like Siri, things like being able to say "Hey Google" to your Google Home, or talking to your Alexa from across the room to order groceries, all that stuff.
Jon Radoff: Oh, mine goes off all over the place with that, by the way.
Al Morris: Yeah, and most of those don't work very well. They're pretty annoying, actually, and it's kind of weird that they're listening to you all the time. So there's a bunch of those that have kind of not worked that well. What we've been doing more recently in terms of abstraction is we actually bring around these microphones. I've got this DJI kit — it's a little microphone setup with these little fuzzy things you put on them — and I take them around with me to events. I was at an ETHGlobal meetup last night, and I took out the mics and sat there with a buddy and we prototyped a project. So there's about a half-hour recording of us talking through all the specific details of this project we're working on. That gets processed by a note-taker app that feeds into a ChatGPT workflow. The ChatGPT workflow creates a GitHub repo and posts the README into the repo. Then he and I will both sign off on that, and the agents will start. So that's what this can look like, which is pretty cool: conversation-to-app as a consistent, abstracted flow that works really well. The real problem with that right now is, if you just let the agents vibe code — like, if we go back and check that code base refactor we started a second ago, we're going to see there's probably a bunch of stuff that is either duplicated or hallucinated or half-finished. That's what AI agents do a lot of the time: they'll start writing something, they'll forget what they were doing with it, and then they'll come back to it and delete it. Which is really frustrating to watch, because you're sitting there and your code is looking really good, and then suddenly it's deleted. And you're like, oh, that's not good. Or it's really good and then it changes dramatically.
Like, I had one where we asked it to refactor the front end of a site, and we asked it to change the text on one button. It didn't just change the text on that button: it actually tried to redesign the whole UX flow around that button, and then it changed the text slightly and deleted the button. That was a pretty annoying workflow. You run into these things with Web3, with decentralized stuff, as well — most of the smart contracts that AI writes are a little bit buggy and have some errors in them. So the trick is you actually have to go through these multi-agent flows, where you get the agents to check each other's work, and you also get them to really critically analyze: did the other agent do the correct thing, or did they do more than the correct thing? In our case, we treat this as an audit mechanism, where not only do we audit the node if they do something that's subpar — we also audit them if they do too much. So over time, we're building up a training set that we can get closer to. It works really well if you have small, modular applications, where you can have one unit test verifying that one thing was written correctly; your whole scope of work has to be structured into all these little modular pieces. We found that some pre-processing with a model like o4 leads to a really clean spec with good acceptance criteria, and once you've got that, you can hand it off to a bunch of swarm nodes and they do a really good job. But the question was about abstraction, and what I would say at this point is: I would not give an agent C++ code — that's probably a bad idea — but giving it some JavaScript hacking is not too bad. It's all about having guardrails in place so that it doesn't go too far off track, and having really strong rules and environments like that helps a lot.
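A toy version of that two-sided audit — penalizing both under- and over-delivery — might look like the following. The `audit_change` helper and its file-set framing are invented for illustration; Koii's actual audit mechanism isn't specified here, but the principle is the same: a change fails review both when it does less than the spec asked and when it touches things outside the spec (like the button-deleting refactor above).

```python
def audit_change(spec_files, changed_files):
    """Audit a worker agent's change against the files its spec allowed.
    Fails both if it did less than asked and if it did more."""
    spec = set(spec_files)
    changed = set(changed_files)
    missing = spec - changed   # under-delivery: required files untouched
    extra = changed - spec     # over-delivery: files out of scope
    ok = not missing and not extra
    return {"ok": ok, "missing": sorted(missing), "extra": sorted(extra)}
```

So a change that rewrites `ui.css` when the spec only named `button.js` gets flagged as over-delivery even though every test might still pass.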
And then the other thing is, with the encoding we talked about before: if you have really good encoding, you can avoid a lot of that hallucination, because you can encode your request as well. If you encode the request and you encode the data set the node is working from, what you end up doing is limiting the context a lot. So, as opposed to the agent forgetting what it was doing with the code base, the agent actually knows what it's doing, because it's solving within a very compact context. The more you can compress that context window and do, like, agent-language encoding, the better things work.
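As a rough illustration of that context compaction, here's a toy retrieval step that keeps only the chunks most similar to the request, instead of stuffing the whole code base into the prompt. Real systems would use learned embeddings; the word-count vectors and `compact_context` function here are stand-ins, not Koii's encoding:

```python
import math
from collections import Counter


def embed(text):
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def compact_context(request, chunks, top_k=2):
    """Keep only the chunks most relevant to the request, so the agent
    works from a compact context instead of the whole repository."""
    q = embed(request)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

The effect is what Al describes: the agent's prompt shrinks to a few relevant chunks, so there is less unrelated material to forget or hallucinate over.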
Jon Radoff: I want to see how these agents would do if we just unleashed them on these LeetCode problem sites. Probably pretty well. How to solve all these problems is probably also so well documented that they'd just crush it, right to the top of the leaderboard.
Al Morris: I did that, actually. Well, not really formally on a LeetCode site. It's on our GitHub, github.com/koii-network. I think it's here somewhere.
Al Morris: Here it is... testing.
Al Morris: I know it was on our website.
Al Morris: Let me find that for you. I think it's in this section.
Al Morris: I think it's on the top of the page.
Al Morris: Let's see... live GitHub.
Al Morris: No, that's not the right one.
Al Morris: We have this repo that has like.
Al Morris: Something like 12,000 pull requests on it now. And it's agents solving, like, LeetCode problems, which they do quite well.
Al Morris: This one.
Al Morris: Oh, here it is. Almost 12,000 pull requests. We got it to do all of these small, modular tasks, and it's incredible to watch. Each one of these has a really concise working commit: very functional, working code, and they're all unit tested, and they always pass the tests. In order for one of these to get accepted by another agent, that agent actually has to run the tests and verify that they passed — that's what we would call the acceptance, or audit, criteria. If you break problems down into these smaller challenges, the agents are just outrageously good at them. So that's what we try to do with our pre-processing: we break down what the agent is going to work on into these modular tests, and then we just pull from the existing database of solutions. We don't even really need to think about solving the problem, which also cuts down a lot on inference time and usually reduces costs.
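That test-gated acceptance step can be sketched very simply: a reviewer agent re-runs the unit tests and accepts the pull request only if everything passes. The `review_submission` function and its callable-plus-test-cases interface are invented for illustration (a real harness would run a test suite in a sandbox), but it captures the acceptance criterion Al describes:

```python
def review_submission(solution, test_cases):
    """A reviewer agent re-runs the unit tests before accepting a PR.
    `solution` is a callable; `test_cases` is a list of
    (args_tuple, expected_output) pairs."""
    for args, expected in test_cases:
        try:
            if solution(*args) != expected:
                return "rejected"   # wrong answer on a test case
        except Exception:
            return "rejected"       # crashed on a test case
    return "accepted"
```

Because the reviewer runs the tests itself rather than trusting the worker's claim, a broken or half-finished commit can't slip into the database of accepted solutions.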
Jon Radoff: So I'm going to ask a question out of some ignorance, but it's around Kaggle, which is the other thing this reminds me of. Now, I haven't really messed around with Kaggle in, I don't know, two or three years. Kaggle, for people who aren't familiar with it, is this site where lots of machine learning problems are posted, and you can compete with other teams who are trying to solve a machine learning problem. It seems really obvious that people will just start to unleash automated AI agents on these problems to try to win these competitions. Because, just like you give out a token for winning, there are dollars given out on Kaggle for winning an award there. In fact, it seems like people could almost start wrapping Kaggle competitions in a problem they submit through Prometheus, based on what you're describing.
Al Morris: That's almost exactly what we're trying to do, and it goes even further: pretty much every big crypto ecosystem has a bug bounty pool, usually for things like, can you implement this library as an integration? Those are usually pretty straightforward. For anything that's integration-related, all the agent has to do is read their docs, read your docs, and then try to get them to work together. That's the area where we see the biggest growth factor for this. I used to be a systems integrator — I went to factories and did automation tech — and a lot of my job at that point was reading a manual for this company and a manual for that company, then going into our lab where we had both products, sitting there connecting all the pieces, and making sure it worked. Sometimes I'd spend a week doing that, and it was painstaking. What I do now is typically pull all the different repos onto my computer and have the agent read both of them. Then my job is mostly to manage the context and make sure the agents are being efficient with their time, which is more like being a project manager than being an engineer. It's incredibly satisfying when it works, actually. It's very fun. On Saturday I built a v1 of our new encodings library, and I was able to test about a dozen different embeddings libraries from Hugging Face in, I think, a matter of four hours. I set up the test, ran it, and got a good result, which we've now implemented as an improvement to our product. In about four hours, I did what would probably have taken two weeks before. So that kind of stuff really, really helps.
Jon Radoff: Amazing. We had a question from the audience, and I think the answer is yes, but maybe you should just tell people what they need to do. Yeah, walk through it — tell people how to check it out.
Al Morris: Yeah, so if you just type Koii into Google, the first link should be koii.network. And if you go to koii.network, you'll get this page, which is about Prometheus. You can also earn KOII this way. If you scroll down, there's a quick — and very loud — video, so feel free to mute it; I've been arguing with our UX team about whether it should be muted by default or not. This link here just downloads the app. It installs in a matter of minutes, you get some free tokens, and as soon as you're up and running, you can support all these different projects: play games, run agents, all that. We also have a few partners that are now white-labeling the node. It's open source, so if you need a browser for your project to run these kinds of community-oriented tasks, you can gate it so that only your token, only your community, can run it, that kind of thing. We're also looking at NFT gating, which is another interesting angle.
Jon Radoff: So one of the things we were also talking about before we went live was the distinction between decentralized and distributed compute. I wanted you to spend a little bit more time on that, and also help us get a better understanding of what decentralization really means. How is decentralized tech going to democratize access to solving these kinds of problems with agents, more so than what we would otherwise already have access to on the internet?
Al Morris: Yeah, so there are a few layers to this problem. In the late '90s, this got going with SETI@home, Folding@home, and a lot of that stuff. Those projects are what we might call the first decentralized computing, but actually they're called edge computing, and the nomenclature is kind of important here. In an edge computing environment, all of the nodes are pushing code — or pushing some work that they've done — back to the center. They pull some information, do some compute over it, and then push the result back. Most of those systems did not have consensus in a global way: the center node just decides, you know, this guy doesn't seem like he did the work, I'm going to kick him out. And that's not actually a decentralized system. It is exciting that they were using all these edge compute devices, and it is cool to reduce costs for scientific applications and things like that. I think SETI@home was mapping the stars and looking for UFOs — I don't know if they ever found any; we might want to look that up after this. The next layer was what we call the Web3 space, which started around 2015 — it could also be dated a little earlier than that; Bitcoin is arguably this sort of model, mostly. In a Bitcoin system, you have all these different nodes, and then you have end users pushing transactions to those nodes. They all broadcast through, and you end up with one point of consensus, which is wherever the current master chain is. Most of these systems are progressing towards a full peer-to-peer grid. The big distinction is that in a peer-to-peer grid, every node has at least two connections, so they always have at least one fault-tolerant connection, and all of the nodes share data directly with each other — you can see the lines all connected between them.
Now, this would be interesting if I had come up with it, but actually the interesting thing is that this was from a 1964 paper by the RAND Corporation, the work that led to ARPANET. So we've been on this track for a long time, and pretty much all of those really bright computer science people way back actually predicted this, kind of like Isaac Asimov predicting all kinds of sci-fi stuff. What we're seeing now is the transition between Web3 and true peer-to-peer. The main distinction here is: how many devices can you onboard, and can you get the devices to actually talk to each other directly? If you look at a lot of DePINs right now, what actually happens is you have a smart contract on a blockchain, which acts as a focal hub, and then a lot of nodes doing work to satisfy what's going on in that smart contract. What we've tried to implement with Koii is a set of standards that allows nodes to actually talk directly to each other. So when you hire a swarm of Prometheus agents, or any of the bots we use for other products, all of those nodes talk directly to each other. When you're getting a result, you're not getting something back that you then have to do something with: the nodes are actually solving the problem for you by conferring with each other and coming to a solution. I think that's really powerful from an economics perspective as well, because it means you have a community of people with 100% buy-in — they own the system. If you see that as an opportunity, you can create a lot of systems that benefit the community, and if you do that correctly, I think it grows very, very quickly. A really good example of this would be Helium: buying a Helium node really means that you are part of the network now.
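The fault-tolerance property Al describes — every node keeps at least two links, so the mesh stays connected when any single node drops out — can be checked with a small graph routine. This is purely illustrative (a toy adjacency-dict model, not Koii's networking code):

```python
from collections import deque


def is_fault_tolerant(adjacency):
    """Check the peer-to-peer property: every node has at least two links,
    and the network stays connected if any one node fails."""
    nodes = list(adjacency)
    if any(len(adjacency[n]) < 2 for n in nodes):
        return False

    def connected_without(excluded):
        """BFS over the surviving nodes after `excluded` drops out."""
        alive = [n for n in nodes if n != excluded]
        if not alive:
            return True
        seen = {alive[0]}
        queue = deque(seen)
        while queue:
            cur = queue.popleft()
            for nxt in adjacency[cur]:
                if nxt != excluded and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return len(seen) == len(alive)

    return all(connected_without(n) for n in nodes)
```

A ring passes (each peer has two neighbors, so any single failure just shortens the ring), while a hub-and-spoke topology — the "focal hub" pattern Al contrasts it with — fails, since the leaves have only one link each.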
And I think just yesterday they announced that they now have a partnership with AT&T, which is pretty incredible. So your AT&T phone, on the back end, is actually using distributed hosting and distributed bandwidth, which is pretty amazing, I would say.
Jon Radoff: Yeah, and I think what Helium is doing well is that, while they're using Web3 technology as the orchestration layer, or the settlement layer for payments and things like that, the user of the technology is not really a blockchain user. Before AT&T, they also did T-Mobile — I think it's a year-plus ago that they added T-Mobile support. So they've been building these bridges into what you think of as the traditional telecom ecosystem, which I think is interesting. It's sort of the way we approach things at Beamable as well: we knew we wanted to use a Web3 architecture, essentially, for running the network, but we didn't want the game studios who are the users of the compute on the network — and especially the players of those games — to have to know anything about blockchain, because in our view it just shouldn't be required. Like, there's all kinds of games.
Al Morris: So it's the word.
Jon Radoff: Yeah, and only a small percentage of games are Web3 games anyway. Even at the studio level, people just don't want to think about blockchain, so we abstract away a lot of the blockchain stuff — or all of it, as far as the studio's concerned. I'm curious how you think about that as well, because a lot of your structure is around this token economy for the agents and the problem solving. Have you thought about the relationship of that kind of economic structure with onboarding massive amounts of users and compute?
Al Morris: So, you know, there are a few layers to the flywheel with these things. We like to say it's very similar to Uber or Airbnb: as you get more providers, the cost goes down, which attracts more demand, and then that demand coming in actually attracts more supply. We've gone through this cycle a couple of times at Koii, and what we're working on now is onboarding full communities as a whole, because we find that then we actually get demand and supply together. If we can get a community that already has a significant need for compute, they can also come and provide that compute as well, and we just end up being the protocol that connects the two pieces. I think that gives people a cleaner experience. Like you're saying: I shouldn't know that there's any Web3 involved — I'm just using an app. And if I have this other app on my computer, then I can provide for that network, and I get to use the app for free, or at a much lower cost, or I get some kind of special badge, something like that. At this point we mostly want to incentivize people to use these systems without making it too difficult. If you have to buy a token and stake the token, there are all these extra steps; it gets very complicated.
Jon Radoff: Exactly. So what's next for you guys?
Al Morris: One last slide here, which is Guerrilla versus Gorilla — where we've been going with this. I gave this talk a few weeks ago at the Internet Archive; I called it Guerrilla Internet: Tactics for Peer-to-Peer Revolution. The idea here is that that kind of degen vibe has really been the mentality for the last couple of years. And I think, the way the world is going right now — whether or not you care about the political landscape — the artificial intelligence field is accelerating so quickly that we're going to have a moment where we have the possibility of monopolies again. I think this is probably the highest concentration we've ever seen. They say the economy goes through phases of bundling and unbundling. Bundling means there's a bunch of buyouts and everything ends up as part of the Meta stack: you use the same login for your WhatsApp, your Instagram, and your Facebook. That's bundling. Unbundling is usually what happens as the market accelerates and a bunch of those companies are forced to split off by antitrust and things like that — or a new technology comes out and creates a lot of competitors to existing paradigms. The scary thing right now is we're seeing both of those things at the same time. We're seeing bundling, where all the companies that used to use AI are now switching over to using ChatGPT or another AI provider. Everybody's standardizing on these common rails, but those common rails run on a private API. That means that as they go and disrupt industries — as we build up all of these AI-enabled businesses — we're actually sending all of that revenue stream back up to these big companies. And that's why OpenAI was able to raise, I think, $400 billion in their latest funding run, which is an obscene amount of money. Even considering that as crazy — I think that's more than Canada spends per year, which is nuts.
So if you think about it that way and really appreciate the depth of this problem, we're looking for a very peer-to-peer, revolution-oriented group of people who are going to take on being the Boston Tea Party moment of this industry. I think we need to get to a point where we really embrace the fact that we want the free market to prosper, and that we want this technology to exist — enough to make the sacrifices for it to survive. And so last week we threw a tea bag into the San Francisco harbor.
Jon Radoff: Nice.
Jon Radoff: So in this whole bundling/unbundling, centralized/decentralized view of the world, when you're thinking about these monopolies — assuming you're thinking about companies like OpenAI with things like ChatGPT — you can concentrate so much of the AI power within one API, essentially, that you then need to use. The interesting observation, though, is that this market ended up being a lot more competitive a lot faster than I thought. It isn't actually just OpenAI: it's OpenAI and Claude and Grok, and a whole bunch of people out there doing this. I remember the narrative a couple of years ago was that the real value was going to be in the LLM, and everybody else was just wrapping an LLM in their application. My observation is that it isn't as simple as that: a lot of the time, the LLM is the commodity component, and you swap out the LLM as it gets better. That's actually really great for people building applications that utilize LLMs, because you can very seamlessly upgrade, even between completely different companies supplying the LLM logic into your application — and your agents, because your agents are organized around different API keys for the different LLMs, and people can choose one versus another to get the right answer at the right price. What are your thoughts on where the value is being created in this stack? And if we think about this monopolization problem: where is the monopolization happening? Is there eventually going to be an LLM so runaway, so much more powerful, that it's the only one people want to use, or is it going to continue to be competitive? That was kind of a big general question, actually.
Al Morris: So there are two things happening. I think DeepSeek is a really good example of what can happen in this industry. DeepSeek was distilled: what distilled means is that someone at DeepSeek asked ChatGPT enough questions that they were able to simulate ChatGPT based on all the answers to those questions. And interestingly, DeepSeek is also more efficient than ChatGPT, because it got to learn from a master. You can think of this like Luke Skywalker learning from Obi-Wan Kenobi: Luke's going to be better at stuff because he's younger and more fit, and he learned all of what Obi-Wan knew. Obi-Wan was better at stuff because he learned it all from Yoda, and Yoda is this old guy who knows everything but is a little tired — which is reasonable, because he trained all of these people. As you go down the stack, eventually you do get something much more powerful. What we're seeing, though, is a lot of hypocrisy at the very large LLMs. OpenAI especially is simultaneously saying, "You can't distill us, that's evil, you should go to jail, let's declare war on China" — and at the same time saying, "You can't stop us from crawling the internet and stealing everyone's intellectual property." The fact that they can say both of those things in the same sentence is just crazy, and that's what we're up against — that's what we're at war with. So what we've been trying to do, by creating this multi-agent marketplace where we've got a lot of different agents competing to solve coding problems, is actually building the .kno database, which is an open standard we've created. It's .kno, for "dot knowledge" — just a little bit shorter.
The .kno standard is designed to provide a common ground for these LLMs to actually talk to each other, and it creates a common vector store. So when you look at one of these repos that we've worked on, there's actually this common thing that all of the different LLMs are contributing to. We're not distilling the information, we're not trying to copy the LLMs or steal their tech, though I guess we realistically could, and I imagine someone else might do that. What we're trying to do is create a common ground, so that when you build a product, you're actually building a product that can be used by all these different products. That commoditizes the backend infrastructure, and then you can actually switch between them. If we don't do that, and quickly, we're going to see a world where all of the training data goes back to a couple of large LLMs. And there is a tipping point there: if ChatGPT does get all the training data and they can successfully prevent anybody else from distilling them, then they will have the only copy of what we call the embedding of reality, which is the vector database that actually maps all the human concepts together, and they would probably have the only one, which is why they got all this funding.
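[Editor's note: the "common vector store that all the different LLMs contribute to" can be pictured roughly as below. This sketch is purely illustrative and does not reflect the actual .kno format: the embedding is a toy bag-of-words counter standing in for the dense vectors a real system would use, and the contributor names are hypothetical.]

```python
# Toy sketch of a shared vector store that multiple models/agents
# contribute to and query, in the spirit of the ".kno" idea above.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SharedVectorStore:
    """Common ground that any agent can write to or read from."""
    def __init__(self):
        self.entries = []  # (contributor, text, vector)

    def add(self, contributor: str, text: str):
        self.entries.append((contributor, text, embed(text)))

    def search(self, query: str, top_k: int = 1):
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[2]),
                        reverse=True)
        return [(c, t) for c, t, _ in ranked[:top_k]]

store = SharedVectorStore()
store.add("agent_a", "the login bug is caused by an expired token")
store.add("agent_b", "docs for the payments API live in the wiki")
print(store.search("why does login fail with token error"))
# → [('agent_a', 'the login bug is caused by an expired token')]
```

Because every agent writes into and reads from the same store, no single model ends up holding the only "embedding of reality," which is the commoditization point being made above.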
Jon Radoff: All right, well, it's going to be an interesting next couple of years while people sort this stuff out. I want to give you the last word here, Al. This has been an absolutely fascinating conversation: we've covered AI agents, decentralization, agents competing with each other for token rewards to solve problems. I really see this changing the way a lot of industries are going to run in the coming years. And it'll be humans side by side with AI, at least in this era; we're not quite replacing ourselves with the AI yet, but AI working side by side with humans to solve really tricky problems and super-accelerate our productivity, like your example of doing two weeks of work in two hours. We're going to see more and more of that, and the value of our time is just going to increase exponentially. Hopefully then we get to go and do whatever we want with the rest of our time. But to close things out: what thoughts do you want to leave people with as they think about Koii, AI agents, and the future?
Al Morris: So I have a hopeful example about funeral homes.
Al Morris: This fellow I met recently at a conference, we've become friends. He runs a company where they do marketing and coordinate finding a funeral home, which comes at a difficult time in people's lives. Funeral homes have traditionally not done marketing, and they're also traditionally and commonly run by much older people.
Al Morris: So typically not super web savvy they don't really understand like Google ads or any of that kind of stuff and they don't have a good way of targeting their consumers at the same time there's a huge amount of data out there that allows you identify people who need a funeral home. It's actually very straightforward to find them through like ads monitoring and things like that and so at this difficult time with people's lives they need a funeral home they can be found and they can be identified and given very easy contact and they can that whole process can be streamlined considerably. So this guy's been able to spin up a business where he gets referral kickbacks from funeral homes for sending them business by connecting them with customers that actually need their service. And this whole process he was able to set up completely with vibe coding and it's now a business that serves I think over 500 funeral homes around the world. So he's able to be there for people in a moment when they actually really need the help and when they don't want to think about it too much they don't want to be like calling 20 different phone numbers and he solves that whole problem for them. He vibe could at this entire thing set the whole thing up on an agent flow and I think it takes him about two hours a day to run the business. And those are the kind of opportunities that are out there where you can actually streamline society in a considerable way it doesn't take being a PhD tech guy it doesn't take any of that stuff it's just about coordinating resources. And then the final hopeful note that I have is I think there's hundreds of millions of opportunities like this around the world and I think we can crowdsource the solutions for them, which means that all you kind of need to do is be part of the deep end. And then you can actually help all these things. 
That's a little bit of a product sales pitch, but mostly I'm just very hopeful about this stuff, which is why I've spent the last eight years working on it. At Koii, we're really motivated by the impact that these kinds of technologies can have. I'm from a pretty small town in Canada, and growing up, I watched probably all the smartest people I knew leave to go live in Silicon Valley or New York or one of those places. I think this is going to spur a transition from global to local, where the local relationships and the local context actually drive how we use large language models and AI. There's a big potential for it to help pretty much everybody, which is really inspiring. So hopefully some people are inspired by this. If you are, we're on Twitter at Koii Foundation, or you can find us at koii.network. We also have PrometheusSwarm.ai, which is available. It's Prometheus like "pro-me" and "the-us," which is our goal: we want things to be pro-you, but also to lead to the "us" that we all get to live in together.
Jon Radoff: All right, thanks, Al. That was absolutely fascinating; go check out Koii Network. And if you're just coming up to speed on DePIN generally, that's what it stands for: decentralized physical infrastructure network. This is just another example of what a DePIN is capable of doing: distributing, or decentralizing, as we just discussed, a lot of resources to scale up compute and solve problems in much more economical ways, but ultimately democratizing a lot of the problem-solving so that more people can participate in the worldwide ecosystem of technology. So that's super cool. I want to thank you for joining me, Al, and thank everybody who showed up for this talk; we had nearly 600 people watching. Next time, if you're watching this on a replay, which is probably 95 to 98 percent of you, stop by when we're live and ask a question. We love the engagement from the audience, like we had from a couple of folks today. It makes it a little bit riskier, but a lot more fun, to have those conversations. So again, Al, thanks for being here. This has been a blast. Until next time, everybody, take care. Thanks.