Originally Broadcast: March 07, 2025
AI Generated Content in Game Development
Jon Radoff: Hey, welcome back everybody to the game development live stream. Sorry we got a little late start today, but hey, computers, you know. Anyway, we've got a really great talk planned today. We've been talking for weeks now about artificial intelligence here and there in the channel. In fact, I think we made some promise along the way that we wouldn't talk about AI every single time, and boy was I wrong about that, because we talk about AI every week. It's just the unavoidable topic. So we decided to say screw it, let's make a whole episode about AI, because that is something everybody still wants to talk about, and it continues to be something that changes every day, with lots of interesting technology around it. So, I am Jon Radoff. I'm the CEO of Beamable. We're a gaming infrastructure company, basically the game engine for the back end of your games. I'm joined by Paul. Paul, introduce yourself briefly.
Guest: Hey, I'm Paul Stefanoch, a game designer and creative director. I currently work at King, but my views are my own.
Unknown: I don't speak for my employer.
Jon Radoff: Thanks, Paul. And we've got our producer Oscar here, who's going to keep us out of trouble with computers. Don't let me press the go-live button anymore, because that messed us up. But we are live on lots of channels: we're on Facebook, LinkedIn, YouTube, Twitch, X, and we're already getting people streaming in here. We have a couple of really awesome guests today that I want to introduce. First, I'd like to introduce Avi Lottner from Sloyd. Avi, tell us just briefly about yourself and what you're doing in AI.
Guest: Yeah, so I'm one of the founders and the CEO at Sloyd. We're combining AI and procedural techniques together to create pretty assets and pretty scenes, mostly for game developers.
Unknown: Awesome.
Jon Radoff: And our other guest is Guy Gadney from Charismatic AI. Guy, tell us a little bit about that.
Guest: Thanks so much, and hi everyone. So we're really focused on, I guess, narratives, in the sense of bringing something which is augmenting the gameplay. We're really interested in how AI can create. So not just that idea of doing the same thing you normally do but faster or cheaper or whatever, but what new things can we create with this that we couldn't do before. So that storytelling and new stuff is how we're exploring things, and I'll show you some stuff later.
Unknown: Awesome.
Jon Radoff: To kick things off, let's talk a little bit about the whole idea of procedural content creation versus AI, and try to place a box around some of those different concepts. If I understand it correctly, Sloyd is actually mostly or all procedural, parametric model creation, maybe using some generative AI for prompting or something. Could you help provide a lay of the land in terms of how you think about it, and then how Sloyd is approaching it?
Guest: So we have both. We have what would be called the classic AI generation, which, "classic" even though it's so new, right, uses NeRFs and Gaussian splatting. And then we have our own approach, which is our parametric engine: procedural automation on top of that, with AI just for customization and prompting, translating text into actions in the procedural and parametric system. That's basically how we do it and combine them. The difference is that procedural is predefined automation, right? It's automation following a recipe that an artist, a technical artist, or a developer has created, whereas AI is generative, not pre-described automation, where statistical paths are created by a machine and it creates without a script. And yeah, I'd be happy to share more, but yeah, maybe later.
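The split Avi describes, a deterministic "recipe" authored by a technical artist, with AI only translating text into parameter changes, can be sketched roughly as follows. All function and parameter names here are hypothetical illustrations, not Sloyd's actual API, and the prompt-to-parameter layer is a toy keyword matcher standing in for a real language model.

```python
# Parametric/procedural approach: the recipe is deterministic, and the "AI"
# layer only maps free text onto the recipe's parameters.

def barrel_recipe(height=1.0, radius=0.4, ring_count=3, ring_color="silver",
                  body_color="red"):
    """Deterministic recipe: same parameters always yield the same asset spec."""
    return {
        "mesh": "cylinder",
        "height": height,
        "radius": radius,
        "rings": [{"y": height * (i + 1) / (ring_count + 1), "color": ring_color}
                  for i in range(ring_count)],
        "body_color": body_color,
    }

def prompt_to_params(prompt):
    """Stand-in for the AI layer: translate free text into parameter edits.
    A real system would use an LLM; this toy version just keyword-matches."""
    params = {}
    if "silver rings" in prompt:
        params["ring_color"] = "silver"
    if "red" in prompt:
        params["body_color"] = "red"
    if "tall" in prompt:
        params["height"] = 1.5
    return params

asset = barrel_recipe(**prompt_to_params("a red barrel with silver rings"))
print(asset["body_color"], len(asset["rings"]))
```

The point of the structure is that edits are reversible and local: changing a parameter re-runs the recipe, whereas a fully generative mesh would have to be regenerated from a new prompt.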
Jon Radoff: Okay, well, that's a little bit about how they're different. I'm going to want to drill in a little more on the pros and cons of those different approaches, at least as they are today, and also how you're thinking about how that's going to continue to evolve in the coming years. But this is also a good moment for me to mention that this is always an open conversation. We've already got the first few dozen people wandering in here to learn about artificial intelligence and proceduralism within game development, and if you're interested in these topics, you're welcome to join us. Maybe you're the head of a company in this space and you want to tell us what you're working on. Maybe you're someone who applies it, or maybe you've just got questions. First of all, you can post comments on any of the places we are: Facebook, LinkedIn, YouTube, Twitch, X. We really, really love it when you share a comment; that lets us continue to have a conversation around it, so you can guide the conversation. It's not really about me and Paul. We're just hosts hanging out here pulling people in, and frankly, we want you to do the hard work for us: ask the good questions. The other thing you can do, if you're up for it, and I love it when this happens, is actually join the stage too. As much as it is us on camera talking amongst each other, you can jump into that as well. So post a comment that you'd love to join the stream. Oscar is standing by watching comments, so if you want to jump into the conversation and be part of it, say what you want to talk about, live and on the air. Oscar will get you a StreamYard link and you can join us on camera; you just need a camera and a mic. So with that preamble, I want to turn it now over to Guy.
Talk a little bit more about what you're doing. Specifically, are you using proceduralism in some of the things you're doing, or is it all generative AI? And how do you classify the difference between those two, before we go back to those big-picture questions of where the market is going?
Guest: Well, I think Avi said it right, and it's worth winding back and doing some definitions of why you would do one or the other. In real simple terms, I'd say that if it's generative, generally it's something where you've sent something over there and the pendulum swings toward automatic. Procedural, as Avi was saying, is where you're holding the reins more, and often what that means is you're controlling what you want out of it. Practically, what that might mean in a game, and in the stuff we've worked on with Sloyd, is that you have a particular style you want to make sure is there, say medieval castles, but you want to riff off that. So it starts from a core set of assets, and then the riffing off it is the procedural piece.
Unknown: Almost like improvisations off that core, rather than wholly generative, where you just go: give me a castle.
Guest: And then you leave it to the model to improvise. My sense is that as we start to look at where the human in the loop is in all of this, the closer you move that dial toward programmatic, the more human creativity is involved. You might have a very clear artistic style, or you suddenly want to get into a scary moment or a happy moment in a particular experience, and that means you pull those reins in, pushing the pendulum toward programmatic. And then you can let it out again generatively. I think combining these two is where it gets really exciting, because it's almost like conducting an orchestra: you're going, OK, I want some drums now, or some violins, or these other things. At the end of the day, that's what we do as creators; we bring in different assets at different times. So that definition is important, but the result is in how you balance those, and that's sort of a new art form.
Jon Radoff: So, Oscar, I think we have some content to show from some of the stuff that Guy has worked on. Are we ready to show any of that? Because some of this stuff is so visual; we want to talk about these things, but we should actually take a look at them so people get a real sense of what's happening in this space today.
Guest: Give me one second, I've got that cued up. There we go.
Unknown: What topic do you wish to discuss in tonight's stream?
Guest: My friend, you're in the city of Glasshaven. Music has the power to connect people and tell stories. Song is a great way to express yourself.
Jon Radoff: So, Guy, first of all, here's what I love about what you did here. For a couple of years, people have been talking about, oh, AI hallucinates too much. That's what's bad about it. And you're like, that's what's great about it. We're going to make that the game. So can you comment on that? Were you thinking the same thing?
Guest: You know, busted. Yeah. I think we leaned into this. I've got to say, there's a bit of an origin story with this project, which is that we started working on it before GPT even launched, so a lot of the code was, you know, really quite early days.
Jon Radoff: ChatGPT, or GPT? Because we all were seeing AI Dungeon. So you were starting it even before that, way, way, way back.
Guest: And initially, the two guys who had the idea originally looked at this, and when we then brought it in-house we built it up from that. The idea was that you should be able to have this sort of dream concept: tell the game your dream, and that would then build the world. It would build the terrain, it would map the terrain, it would build the buildings. The buildings, you need to be able to walk through them, walk up steps that go round and reach a level that you walk out onto. So architecturally, doing that procedurally is really hard. The characters we have are randomized; it's never consistent, but there can be up to 50 different NPCs in there that are spawned differently each time. The conversations you have are all with characters you would never have met before and will never meet again. So all of these things are really very fluid in terms of what we built. The story is interesting, because we were creating this as OpenAI was evolving its GPT models, and we were in touch with them very early on about GPT, and then, in 2022, elements of ChatGPT. So it was a real labor of love, and I suppose the key thing was that there was this vision of how it should feel; the font, the look, that was important. Then we got to meet Sloyd, and we were working together very, very early on, as they were in sort of alpha-beta, cooking up how we connect this stuff together. So some of the buildings that you see in there were built in collaboration with Sloyd, and technically speaking, what we're doing, if you look at the buildings specifically, is that we have a series of tiles
Unknown: for specific styles, for a particular type of building. And that, if you like, is the database sitting in the application. We then procedurally generate that building mesh, you know, in the construct.
Guest: There are elements in the generative space too, in that we have walls with graffiti, and that graffiti is coming from DALL·E, so it's generative: it has never been seen before and will never be seen again. And the conversations are all real time, so anything you've dreamt feeds into those conversations. One of the key things I love, in everything we do, is that NPCs and chatbots and all that stuff are usually one-to-one, so it's like walking down the street watching where you're going, and you only notice someone when you bump into them. Whereas if you have something narrative, you might get close to an NPC and it leans in and goes, psst, I hear a secret. That sort of thing is a provocation into the narrative, so we wanted to write that in as well. So it's that combination: the procedural piece, where there's a design style we want to build around certain parameters, set code in that Unity build, and then calling out to these different engines at particular points for particular reasons. And Jon, you're dead right: we cut ourselves some slack by calling it a dream, because it can hallucinate a little bit, but that's half the fun of it.
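The tile-based construction Guy describes, a style-specific library of pre-authored tiles assembled into a building mesh at runtime, can be sketched like this. The tile names, style key, and grid layout are illustrative assumptions, not the actual data format used in the project.

```python
# Toy version of tile-based building assembly: a style defines a small
# library of pre-authored tiles, and a facade is procedurally assembled
# by choosing a tile per grid cell. Seeding makes the result repeatable.
import random

TILE_SETS = {
    "glasshaven": {
        "wall": ["wall_plain", "wall_graffiti", "wall_window"],
        "roof": ["roof_flat", "roof_vent"],
        "door": ["door_metal"],
    },
}

def build_facade(style, width, floors, seed=None):
    """Assemble a facade as a grid of tile names; deterministic for a given seed."""
    rng = random.Random(seed)
    tiles = TILE_SETS[style]
    grid = []
    for floor in range(floors):
        row = []
        for x in range(width):
            if floor == 0 and x == width // 2:
                row.append(rng.choice(tiles["door"]))  # ground-floor entrance
            else:
                row.append(rng.choice(tiles["wall"]))
        grid.append(row)
    grid.append([rng.choice(tiles["roof"]) for _ in range(width)])  # roof row
    return grid

facade = build_facade("glasshaven", width=5, floors=2, seed=42)
print(len(facade), len(facade[0]))  # 3 5 (2 floors + roof row, 5 tiles wide)
```

A generative layer (like the DALL·E graffiti Guy mentions) would then only fill in surface detail on individual tiles, leaving the structure itself controlled and walkable.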
Jon Radoff: Avi, one of the questions I've been asked by VCs continuously, every time they look at a 3D-graphics generative AI startup over the last couple of years, is basically: is 3D model creation ready for prime time? Now, you're not exactly doing that, so I'm just curious about your take. Where are these technologies today, and where are they going?
Guest: It would be interesting to also hear Paul on that, from the King side, and other game developers. We're seeing increased adoption. At GDC 24 they asked people in the survey about adoption, and about 50% of the game companies surveyed were using AI, but mostly around the business side, maybe Copilot, and maybe some 2D images. In the survey done for 25 you're already seeing higher adoption, but also higher antagonism. What it really comes down to, when you're looking at completely generated meshes, like diffusion, where the machine learning generates every pixel, or Gaussian splatting and NeRFs, where the network generates every point and creates the mesh: it's come a long way, both in time to production and in topology, but I think most professional artists would say they would not use it yet in games. The topology is not good enough, it consumes too many polygons, and it's really difficult to change. It just doesn't fit into professional workflows, because if you want to make a change to a model, with a parametric asset it's a lot easier to take parts and work with the UVs, whereas with a mesh that comes out of Gaussian splatting it's: try another prompt, and maybe you'll get something better, or maybe not. You're basically iterating for a really long time between versions. So I wonder if we're going to get there, but I'm a big believer in a combination with procedural to really get to something professional studios will use.
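Avi's point implies a kind of acceptance gate: before a generated mesh enters a production pipeline, it gets checked against a polygon budget and basic requirements like UVs and editable topology. The sketch below makes that concrete; the thresholds and the dictionary shape of the mesh description are illustrative assumptions, not industry standards or any real tool's format.

```python
# Rough sketch of a production-readiness gate for generated meshes.
# Thresholds here are made up for illustration.

def mesh_passes_gate(mesh, max_tris=20_000, require_uvs=True):
    """Return (ok, reasons) for a mesh described as a plain dict."""
    reasons = []
    if mesh["triangle_count"] > max_tris:
        reasons.append(f"over budget: {mesh['triangle_count']} > {max_tris} tris")
    if require_uvs and not mesh.get("has_uvs", False):
        reasons.append("missing UVs")
    if mesh.get("quad_ratio", 0.0) < 0.5:  # generated meshes are often all-tri soup
        reasons.append("topology too triangle-heavy for easy edits")
    return (not reasons, reasons)

# A splat-derived mesh vs. a parametric one, with made-up numbers:
splat_mesh = {"triangle_count": 480_000, "has_uvs": False, "quad_ratio": 0.02}
parametric_mesh = {"triangle_count": 3_200, "has_uvs": True, "quad_ratio": 0.9}
print(mesh_passes_gate(splat_mesh)[0], mesh_passes_gate(parametric_mesh)[0])
```

The contrast in the last two lines is the point Avi is making: the generated mesh fails on every axis an artist cares about, while the parametric asset passes by construction.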
Jon Radoff: You pointed at polygon count as an issue, and I wonder, with things like Nanite in Unreal Engine, does that fix the issue? Granted, I know maybe people aren't running Nanite all over the place because there are concerns over which devices it can actually be deployed on, issues around limiting your target audience, but it kind of solves this problem of complex geometry within your game. I don't think Unity has a great solution for that today, but one could assume this will eventually be figured out across the board. What happens once we don't care about polygon count in the source model, or will we keep being concerned about that?
Guest: It's not just poly count. As Avi already implied, it's also the structural design. Different workflows take different approaches, and when something takes a new approach, it takes a little more time to catch back up. In my experience, and the experience of others I speak to, the big gap is taking those tools and running that material straight into a AAA game. And honestly, I don't even know if this is true today, because the stuff changes week over week, but I think most of us working on AAA and quadruple-A games are thinking: we have very exacting standards for our pipelines, and we have very exacting standards for the presentation of our IPs. We need the hallucinations, we need that stuff for creativity, we see that, and it's very desirable. At the same time, as Avi already indicated, the rework time often exceeds the value for us. But like I said, that feels like it's changing week over week, so I don't know where we are this week.
Jon Radoff: Avi, you're right in the center of this technology. How do you think it's going to evolve over the next, say, two to three years?
Guest: When I hear objections, and we hear them all the time from creative directors and artists, I always try to think: how much of this is "I just don't want to change my workflows, because it needs adjusting for something new," and how much is "the technology just doesn't cut it, it's not good enough"? Where are we on that? I think we're at the point where some technologies are already worth the switching costs, and the studios that adjust to them will get a head start, though not in every area. So yeah, I think it's getting there. That said, classic generation has big limits. It doesn't know how to create separable parts; if, as an artist, you want to change things, it's not possible right now. It cannot create, first of all, buildings with interiors: you want to go inside the building without a loading screen and something else happening. There are limits like that which I think are still very far away, even if the topology gets cleaner, and instead of taking two minutes it takes one minute to generate. Those are things that require a different mindset, a different way to think, create, and build the networks. If I can add to that, because I think it's a really good piece, and also the thread Paul is talking about, AAA games and quadruple-A games, which I love: we see this sometimes when we're dealing with IP from Hollywood studios, where there are very tight restraints on a piece of IP. So you're looking at a pyramid structure, where you've got a small number of incredibly professional, high-quality, high-value IP structures.
Unknown: Now those games, we know, are fewer and further between because of the production cycles. However, as you head down the pyramid, into, for want of a better word, the indie area, or handheld devices, or whatever it is, those restrictions are loosened a little bit. There's a bit more playfulness, a little bit lower expectation of
Guest: the quality, the fidelity, the photorealism of the graphical look. And I'm interested: there is no hero yet, no sort of Angry Birds moment in mobile that's come out of this. I think you were referring to AI Dungeon before, which is one of those moments people look at and see how it works, and we're starting to see more of these. So it's almost like there's another little nascent genre, maybe it hasn't got a name yet, of playfulness around the edges, where that drop-in or that difference is interesting. And the other thing I always say is never say never. It's just so fast; I could call you in the next few days like, oh, it's done, you know.
Jon Radoff: Well, it seems like this ability to just spawn dreams is getting closer to the idea of the holodeck from Star Trek, because it's literally asking you what you want. When I think about the holodeck, it was: go inside the holodeck and just say what you want, and if it wasn't exactly right, you tell it to adjust. Which to me feels very much like the prompting experience we all have going into ChatGPT, or using something like Cursor and Claude: tell it this is what I want for my program, not quite right, here's what I want to change, and it changes it for you in real time.
Guest: Yes, Jon. The only thing, the place where it really struggles, is consistency. The chatbot analogy holds: what a chatbot essentially is, is a question-and-answer session, a very short little moment. But if you're building a game that has gameplay for four or five hours, that consistency of character, of narrative, of visual style, of audio, is what it struggles with. And that, I think, is one way the message and the medium don't quite match up. The holodeck is great for a moment of simulation: I'm in this particular room, I've gone back in time, gone forward in time, I stay there for a little bit. And I think that's where the procedural piece starts to come in, because we need more direction. I couldn't agree more. When I was talking about quality, I actually do think the quality of these tools is super great; it's the fidelity over time that's the problem, especially when you start talking about intellectual property, like you were saying. Intellectual property owners get very fussy over the pixel-per-pixel existence of their product, including proprietary ones. So yeah, that consistency thing is, at least in my world, one of the biggest things holding us back. But again, it feels like it changes every week, totally. Have you guys seen Minecraft AI? Oh yeah, that just reminded me of the consistency thing. For the audience: it's a Minecraft game, but without a game engine or mechanics. It's just interactive video that's streamed, generated at around 20 frames a second. So it's a bit slow, but it maintains some consistency, the way short generated videos maintain some consistency. But there's a funny video where the guy spins around and looks at the daisies: he looks at one view once, then goes back and looks again at the same view, and it's just grass. The daisies are gone. And I mean, it's improving; you do see videos with longer sequences, and some people are talking like, forget about making games, we'll make videos. But if you're making a TV series, you need to go back to the same settings again in an animated series, and if you're making a multiplayer game, you need people to see the same thing from different angles. There's no getting around creating a 3D environment. And when you're creating environments: yeah, okay, what's going on, where did all these come from, what is this, a horse? No, no, it's a sheep. No, it's a bunch of sheep.
Jon Radoff: Yeah, we kind of laugh at that weird hallucinatory effect. For anyone who doesn't realize, this is a generated, interactive video, not a game engine, so that's kind of interesting. But three or four years ago we were playing around with stuff like DALL·E and those various diffusion models you could use for image generation, and it would take like fifteen minutes, or an hour or two, to generate an image for you. Could you imagine, three or four years ago, someone saying to the average person, and I think there were some people who were definitely in the know, hey, we're going to go from one hour to generate an image to, in just three or four years, generating 24 frames a second as interactive video? That's pretty impressive; that's how much faster and more optimized these models already are. It gives you a sense of a trajectory that could be pretty amazing. Where is that going to be in another three or four years?
Guest: So, I'm a game designer, not a technical AI expert, so what I want to know is: when you look away from the daisy, why isn't there a second agent running behind, doing basically the reverse, looking at the image that was generated and sticking that daisy into the database? You follow me? Why isn't somebody reverse harvesting the dream and codifying the dream as we go? It's really interesting. To that point, it's probably because that person hadn't thought about it; that's a really good idea. You know, the speed of all of this is, in my career in technology and creative technology, I've never seen it this fast, ever, over 30 years.
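Paul's "reverse harvesting" idea, a second process that codifies what the generator emitted so revisits stay consistent, can be sketched as a world-memory cache. The class, keying scheme, and object names below are hypothetical, just one way such an agent could persist the dream.

```python
# Sketch of "reverse harvesting the dream": a second agent watches what the
# generative model emits and codifies it into a world database, so the next
# time the player looks at the same spot, the daisies are still there.

class WorldMemory:
    def __init__(self):
        self._db = {}  # (region, view_key) -> list of observed objects

    def harvest(self, region, view_key, observed_objects):
        """Record what the generator produced for a given viewpoint."""
        self._db.setdefault((region, view_key), []).extend(observed_objects)

    def recall(self, region, view_key):
        """Return previously codified content, or None if never seen."""
        return self._db.get((region, view_key))

memory = WorldMemory()
# First look: the model dreams up daisies and a sheep; harvest them.
memory.harvest("meadow", "facing_north", ["daisy_patch", "sheep"])
# Player spins around and looks back: recall instead of re-generating.
print(memory.recall("meadow", "facing_north"))
```

On the recall path, the cached record would condition (or replace) the next generation pass, which is exactly the consistency guarantee frame-by-frame generation lacks on its own.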
Unknown: However, what I say to people, when we're talking to groups, is that it can feel very overwhelming with AI, because every day there's some new thing in the newsfeed. Actually, the longer-term patterns, I think, follow two key things. One is: rather than thinking about AI, think about automation.
Guest: And then, to Paul's point, if you're a game designer, you're starting to think about automation in that workflow, and where it might fit. Then you're on really happy ground, because you're thinking about your pipelines, and then thinking about automation. The other thing, which goes back a little, Jon, to what you were saying, is to chart the impact across time. It's obvious to say, but it follows the same path the internet did. In the early days of the web you were on text, then on stills, then tiny little videos, and then it grows up; it follows the bandwidth. So I think we can assume that whatever the limitations of the moment are, whether it's 15 frames a second or 20, we will get up to 60 frames a second, or whatever biological benchmark we need for our eyes. The same applies to latency: think of early chat, where we were receiving messages from someone else and waiting. Under a certain threshold, it's fluent conversation. So I think, as we nudge towards real time, there's probably, I don't know what it is, but there will be a psychological barrier, and Avi, you were mentioning things like render time for a particular asset.
Unknown: So I think as long as it's either pre-rendered just before, or close enough to real time that we believe it's real time, that's exciting.
Guest: But these are barriers that are just plowed through every week, these sorts of limits.
Jon Radoff: So if we can, I would love to show off Sloyd, so people can get a sense of what it's like to work with it. But while you're thinking about what you want to show in that respect, Avi, let me just remind our viewers of a couple of things. First of all, we've got over 100 viewers here in the live stream; sure, there will be thousands on the replay, but this is why you want to be part of the live program. You can actually ask questions right now. You can post a comment and we'll surface it right on the channel and talk about it, so you can be part of the live experience. If you're seeing this in replay, come on live next time, because it's a real opportunity for you to interact with leaders in game development on the technology side. Today we've got two amazing leaders at the forefront of artificial intelligence as it applies to games. Every time we do this, it's studio heads, it's technologists, creative heads; it's people who are really building product, whether on the technology side or the experiential side of games. So we'd love for you to be part of that live conversation, which just means you have to be here when it's live. We do this weekly on Thursdays at 1 PM Eastern time, so join us. You can also be part of the live conversation on video. If you want, Oscar will get you the link; it's a little bit of a secret link, you get to be part of the club here, and all you need is a camera and a mic, and you'll be part of the live studio experience. It could be that you just have questions you want to ask, or maybe you've got your own AI project that you want to share with everybody. That would be perfectly great. Now, as Avi is continuing to prepare, you know, it's a lot to show everyone how this stuff works: you said something really interesting before, Guy, this idea of playfulness at the edges. I think those were the exact words.
I really love that idea, because the thought it inspires for me is about the application of AI here. It's not going to be just about making games faster, the faster-horses analogy, to invoke Henry Ford; that is an application of AI, but it's not going to be just about that. And it's also not necessarily going to be just about taking games and doing more of the same, but with AI. So it's not necessarily going to be Baldur's Gate 3, but with AI-implemented NPCs everywhere that you interact with. I pass no judgment on whether that's a good or bad idea, but you're talking about something different, which is almost a whole new category of product: they're kind of game-like, they use AI, they have unexpected elements in them. Can you develop that idea for us a little bit more, this playfulness at the edges? Where are we going with this?
Guest: So, you know, I'll bring in some of the work we're doing with Hollywood studios, actually in the video entertainment space. There's terminology rising a lot in that video generation space, around narratives, of "liquid stories": the idea that the story isn't fixed in its space, it's playful, and, you know, it's something we in the games industry all find quite useful. The idea is that who I am as an individual player, and how I want to be within that world, can influence the world. An example of this might be: I'm in a standard 3D game, and I'm running through a town, a sort of Western town. In a standard game you may run through, and that mission is completed in, you know, a few minutes. There are two ways, just off the top of my head, where I might change that. One is that I stop in the bar. If I walk through that saloon door, go over to the left, and start speaking to the people, and by the way, voice in, voice out, like we are here, or like we are in a co-op environment, whatever it may be, then the people on my left, let's say there are 10 of them, are now my friends, because I went and talked to them. Conversely, the people on the right are now not my friends, because I ignored them to begin with. And what that then does, because those are now memories in that game, is it starts to trigger and cascade, because I have now influenced the way that world exists by interacting with those characters. There's a parallel with the grandfather paradox around time travel: don't go and talk to anyone back in the past, because it's going to influence things. It's that piece. There's another piece, which is that the world itself can adapt. I saw this in a demo this week, actually, where, as I walk through a town that has been created by an artist, that town can change visually.
So I could have that town suddenly change to black and white, to a sort of charcoal look. I can have day and night change things. In a traditional programmatic world you'd have to create all the tiles for that, or the lighting would have to change. What actually happens with the AI is that it holds the entire model consistent but layers it and views it in a different style. And why would you do that? It may be a thing Paul wants to do as a game designer, because he wants to do something interesting and change up the world. Or you might want an experience of what it could look like in movie land, like Spider-Man: Into the Spider-Verse, where you suddenly get this incredible confluence of different styles, done for effect. So again, the deployment of AI allows this sort of fluidity and playfulness, not only for the game designer; it allows the player to insert their own character into the game and evolve it. It's difficult to articulate, because so much of what we do at the moment is telling people about it when we really need to show it, and these tools are evolving so fast. The game designer, I think, is still God in that space, but there's just more freedom of movement, because some of those technical restrictions we used to have, like polygon counts or frame rates, are loosening. We're starting to have a little more fluidity there. That's the long and short of it; I hope it makes sense. Some of this is difficult to articulate.
Jon Radoff: Let's do some showing instead of telling. Let's look at some of these tools so that people get a sense of what it's like to work with them.
Guest: All right, let's pull it up. This first one here is an example of using Sloyd as a plugin within Unity, and it's an example of prototyping. The idea is not to create the finished asset but to rough out a concept of how something would look, in this case the back alley of the Moscone Center, right, at GDC. Here I'm doing this with text prompts only, using the Sloyd plugin. I've asked it to create a barrel in silver, and I specified the colors: silver rings and red barrels. Because of the method we use, assembling pre-made parts together and texturing them in real time, it takes seconds instead of minutes. Here I'm editing this factory with some text: I'm asking it to change the roof and add the HVAC system, so it looks a bit more factory-like. Now, rusted lockers: you see it comes along with a rusted iron texture. And I'm doing some edits, like "I want it to be larger," and, so they don't look identical, "remove the legs." In about two minutes I have the beginning of a scene, adding another building and a fire extinguisher. I actually started doing sequences like this in demos, just to show it around, like creating a house with furniture, or a living room or a bedroom. And that's one use case: when you have that speed of creation in seconds and editing in seconds, it's really effective for concepting and prototyping. The second link, let's see.
Jon Radoff: I'm not sure this one will load, so maybe jump to the third. While we're getting that ready, some questions about what we were seeing there. When you're prompting it, it's selecting a base model based on the prompt, if I understand the way the tech works, and then you're producing a model that's parameterized, so you can adjust it. Am I roughly capturing the internal technicals?
Guest: So we have a library of parts made by artists, who are also paid for creating them. We use the AI to understand the text, and then to take those parts and assemble them together into the closest thing we can create to that text prompt. Then we also texture in real time: we pull from a library of hundreds of textures, whether stylized or realistic, and put them on top of those parts. Because it's part-based, the textures are really clean. So we can create in seconds and edit in seconds with the same mechanism, and when we edit, it can even understand things like style. If you ask for something stylized, we don't just select the more appropriate textures; it will actually change the shapes. It might distort the dimensions so it looks more stylistic, make things more beveled. Those are all capabilities where we can adjust how the parts are assembled into a shape, but also go into every single part. Each is a parametric part, meaning it's stored in code, so we can change small things like the beveling, and together we eventually compile a model. The great thing is that it happens in seconds. The limitation is that where a purely generative approach could potentially make anything, here you're within a space of available parts. And that's why those two methods are powerful together.
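The part-plus-texture retrieval described above can be sketched as a similarity search over a library. This is a toy illustration under my own assumptions (hand-made three-dimensional "embeddings," invented part and texture names), not Sloyd's actual pipeline, where the vectors would come from a real text encoder:

```python
import math

# Toy "embeddings": in practice these would come from a text-encoder model.
PART_LIBRARY = {
    "barrel_body": [0.9, 0.1, 0.0],
    "barrel_ring": [0.6, 0.5, 0.0],
    "locker_door": [0.1, 0.9, 0.2],
}
TEXTURE_LIBRARY = {
    "rusted_iron":  [0.2, 0.8, 0.5],
    "clean_silver": [0.7, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pick(library, prompt_vec):
    """Retrieve the library entry whose embedding is closest to the prompt."""
    return max(library, key=lambda k: cosine(library[k], prompt_vec))

prompt_vec = [0.85, 0.2, 0.05]   # stand-in for an embedded prompt like "silver barrel"
part = pick(PART_LIBRARY, prompt_vec)       # -> "barrel_body"
texture = pick(TEXTURE_LIBRARY, prompt_vec) # -> "clean_silver"
```

The retrieved part then goes through procedural assembly rules, which is why the result stays clean in a way a pure generative model often does not.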
Jon Radoff: So the language model is essentially a user interface, so to speak. And then, I'll guess, you're doing something like a vector similarity search on the desired object or model, and surfacing that from this library assembled by your artists, but it's structured in a way that you can then make adjustments to it.
Guest: Yeah. At the very highest level, the AI is either a language model or one of the other models we use, including one that matches text to image snippets. The level below that, we're actually using procedural rules to say a roof needs to snap on top of a wall. We'd like to do that fully with AI, but that part we don't; we give assembly instructions through procedural methods. And the very lowest level is a parametric engine, where we have every shape, with its splines or primitives, stored only as code. Because it's code, it's very easy to manipulate the base shapes, where a shape might be a column in the building we see over here. So those are a few examples. Everything is actually from the web app, but we placed it here. These are examples now at the level of a finished asset, not prototyping, where you typically see people taking these and putting them in their games directly. What we see with professional users is that they like to take it as a starting base and then put their own touch on top. If a professional artist uses us and tells us they've saved 60% of the work, we're actually really proud. That's a good outcome for us.
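The lowest layer, a parametric part stored as code, can be sketched like this. The field names, default values, and the "stylize" edit rule are my own assumptions for illustration, not Sloyd's actual engine:

```python
from dataclasses import dataclass

@dataclass
class Column:
    """A parametric part: the shape lives as code, so edits are cheap."""
    height: float = 3.0
    radius: float = 0.25
    bevel: float = 0.02
    segments: int = 16

    def stylize(self, amount: float) -> "Column":
        # A "stylized" edit distorts dimensions and exaggerates bevels
        # rather than swapping in a whole new mesh.
        return Column(
            height=self.height * (1 - 0.2 * amount),
            radius=self.radius * (1 + 0.5 * amount),
            bevel=self.bevel * (1 + 4 * amount),
            segments=max(6, int(self.segments * (1 - 0.3 * amount))),
        )

plain = Column()
cartoonish = plain.stylize(0.8)   # squatter, fatter, more beveled
```

Because the shape is just parameters, an edit like "make it more beveled" is a numeric tweak followed by a re-compile of the mesh, which is why edits land in seconds.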
Jon Radoff: And I'm sensing from what we were watching that when you include adjectives like "rusty," it's intelligently applying a texture map, or something like one, to the various surfaces?
Guest: It interprets text into either texture or material, where material could be the transparency, metalness, or roughness of the shader, with shape-key matching as well, and into the shape itself. So it's taking all those text inputs and interpreting them into those various ways of manipulating the model.
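A minimal sketch of that adjective-to-shader mapping, assuming a standard PBR parameter set (roughness, metallic, transparency); the lookup table and values here are invented for illustration, not the product's actual mapping:

```python
# Hypothetical mapping from prompt adjectives to PBR shader parameters.
ADJECTIVE_MATERIALS = {
    "rusted": {"roughness": 0.9,  "metallic": 0.6,
               "base_color": (0.45, 0.25, 0.15)},
    "silver": {"roughness": 0.2,  "metallic": 1.0,
               "base_color": (0.75, 0.75, 0.78)},
    "glass":  {"roughness": 0.05, "metallic": 0.0, "transparency": 0.9},
}

def material_from_prompt(prompt: str) -> dict:
    """Merge shader parameters for every known adjective in the prompt."""
    params = {"roughness": 0.5, "metallic": 0.0}  # neutral defaults
    for word, overrides in ADJECTIVE_MATERIALS.items():
        if word in prompt.lower():
            params.update(overrides)
    return params

mat = material_from_prompt("a rusted iron locker")
```

The same prompt words can simultaneously drive shape edits, so one adjective ends up touching several layers of the model at once.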
Jon Radoff: Paul, in the work you do, are you seeing people eager to apply this kind of technology to their workflow?
Guest: Oh, absolutely. Not just in my own work; talking to other designers and other studios, at an industry level, I think the way you just described it is right: it's a starting point at this point. But especially when you're building large quantities, large volumes of assets, that starting point is a real time-saver. People are absolutely using this already, at all levels of studios. Though at the highest levels there's still a long delta between what you see on the screen and what can go through the final pipeline process. But everybody's using it.
Jon Radoff: I think the really important thing is that this stuff actually does work. The early take on some of these things was, "oh, this is just going to be another one of those over-hyped things you can't really put into practice," but what we just watched is a very practical tool that people can use, especially for prototyping.
Guest: Yeah, I don't know anybody that isn't doing it at the foundational ideation level, but I know zero people that are doing it as a complete, total pipeline. I don't know anybody that's doing it hands-off, end to end.
Jon Radoff: An end-to-end pipeline, not yet. We don't quite have level-five AI. Level five is: you just dream it, and it comes into being exactly as you want.
Guest: And depending on the complexity of your pipeline, the number of IP holders, and the rigidity of your final product, different people are in different places on that spectrum. Well, I have a question. From your perspective, where things stand right now, who's going to see the immediate benefit from these technologies, off the shelf or retrofitting what they're getting from vendors? Is it going to be the people in the triple-A space, or is it going to be the indies, who are more prone to experiment and to accept lesser fidelity in pursuit of an end result, a shareable experience? People will do stick figures, because stick figures are compelling on some level. Are the indies going to be the ones who hold this banner up highest first, to try to get stuff out there? I think, from my side, the short answer is both, but doing different things. I remember doing a closed round table at Devcom a few years ago now, about three years. There were a couple of, I won't name them, global triple-A, quadruple-A studios, and they had said: look, on the current conveyor belt of that process, especially the early stage Paul is describing, the pre-production stage, especially in the production of early-stage artwork, they could let go of 50% of that team, and by their prediction that was about 50 people. That's a chunky amount. So at that level, you're looking at much larger organizations which, like large businesses in any industry, are very focused on the bottom line.
And risk, for them, at the moment and for a little while now, especially around the social elements of AI, in other words the negative reactions that have come from voice-over artists or designers, has been a big factor. So there's been a bit of, again, we know everyone's using it, but they're not necessarily talking about it; there's been a worry about the risk of being seen to use it. Then you shift to the indies, and they can absorb that risk, because there may be five, ten, twenty people in the organization. They're looking at how they can compete, punch up a little, and get ideas out faster. In that sense the risk is less for them, and they can also start to think about ideas that are equally risky, maybe a gameplay concept that no one's tested yet. That's hard to implement, as we know, in a major triple-A production pipeline that may last for years. And, just anecdotally, we've talked to triple-A game studios about putting Charisma into that pipeline, and the problem we found selling it is an interesting one. When a game first starts, it's got an existing pipeline: this is what we did three years ago, this is what we did two years ago, this is what we did one year ago, so this is what we're going to do now. That pipeline, hypothetically, let's say, is three years long. The project manager and the producer on that particular project bring up their spreadsheet, and they know roughly what is going to happen from step zero to step 100. If you try to swap out one of those pieces and say, "this one, number 56, is now AI," it's "whoa, hang on, you're pulling a plate from the bottom of the crockery pile." It's destabilizing, because they don't know what it's going to be.
So there is a slight fault line, in the sense that games are a high-risk technical industry anyway, and the idea of popping out an existing process for something innovative, even though there could be a positive bottom-line and profitability element, still meets reticence. We've found that surprising. Speaking candidly and personally, I compare it to video studios or Hollywood studios, where adoption has been faster than in the games industry, and my sense is that's why: games are already a high-risk industry, so swapping out number 56 in that project manager's schedule for an unknown, will it make it faster or won't it, might it go wrong, that whole mental model is a challenge. Sorry, it's a long answer to your question, but I do think the impact is tsunami-like. It's not a wave, not a particular trend that comes and goes, because this is so pan-industry. As you sit at home with your coffee cup, that coffee cup has got AI somewhere in its manufacturing process. It's a really broad piece, and we know therefore that it's not going away. The final piece to add, because I get involved, bizarrely, in things like governmental policy for AI in the creative industries, is that it's a big deal: if we look up at national and international use of AI, it's not even about technology, it's about power. If OpenAI releases, or is about to release, a video model,
China will release one a couple of days before or a couple of days after. There is a seismic piece around this which is political. Then again, you never got the White House saying, "hey, we're going to build a White House in Second Life, that's really cool, we're going to put all this into it," but they do with AI. So, you know, we'll see.
Jon Radoff: Well, thanks, Guy. We're at the top of the hour, so we're going to have to conclude there, but I really want to thank our guests, Guy and Avi, for joining us today. You guys are welcome back anytime, especially to talk about AI, but you know a lot about the game development process too, so feel free to join us. That's something anybody can do, by the way: if you're watching this in a replay, join the other 150 or so people who were part of the live audience, at least drop a comment, or join us on stage. Next week we have a really awesome broadcast. I'm going to do it live from South by Southwest with Tony Parisi, one of the godfathers of virtual reality, inventor of VRML, professional musician; he's been involved in blockchain projects, games, game platforms, worked at Unity, he's done it all. So join that one, it's going to be really, really cool, and we're doing it Wednesday at 9 am Eastern time. Then on Thursday at 1 pm we've got a couple of really great indie game developers, Churro Lamb from Daisie On and Renee Getons; they've both worked on a number of indie game projects, and they'll share their journey through the whole messy universe of indie game development. I hope you'll join us for both of those, but this is where we have to call it the end for today. So Guy, thank you; Paul, thanks for being here again; Oscar, thank you; Avi, thanks for joining; and I'll see some of you at South by as well.
Guest: Thanks everybody we should definitely do it again.
Unknown: Take care everybody.