Originally Broadcast: May 23, 2023
One of the fastest growing areas of creative expression in games has been the emotes and animations you can apply to your character in games like Fortnite. Yassine Tahi saw this realm of self-expression coming, and created a platform that democratizes the creation of character animations.
With Kinetix, individual creators can capture movement and make their avatar animations available through a marketplace, and end users can invoke them through text-to-animation generative prompting. This requires solving a huge number of challenging technology problems: motion capture with an ordinary camera that maps to the character rig; adjusting animation to the geometry and physics of an environment; labeling the data in a way that generative AI technology can make use of it; and making animations interoperable across different games, virtual worlds, and metaverse platforms. Some of the games and platforms we discuss include Zepeto, Inworld, The Sandbox, Fortnite, and NextDancer.
00:00 Introduction
01:30 Democratizing Animation
03:24 Fortnite Emote Revenue
05:49 State of Text-to-emote Prompts
09:00 Deep Learning for Motion
12:44 Motion Capture Technology
15:00 AI NPCs, Inworld
16:22 Constraints of GenAI Animation
20:32 Avatar Codecs
23:40 Exponential AI Improvements
27:15 Creator Economy
31:00 Interoperability
41:07 The Future of Generative Animation
Yassine Tahi: Today there are only a few hundred thousand people who can do animation, and that's a shame, because it's the first form of expression and storytelling in 3D space. If you don't have that, you cannot make characters express themselves. So we think of it more as a liberation: taking something very technical that only a few people could do before and opening it up to many, many more people. For me it's not about replacing anyone; it's more about creating a whole new market and enabling people to do things they couldn't do before.
Jon Radoff: I'm with Yassine Tahi, CEO and founder of Kinetix. They are blazing a new trail into generative AI with motion, animation, and performance around avatars in virtual spaces. Yassine, welcome.
Yassine Tahi: Hey Jon, thank you for the invitation.
Jon Radoff: Let's talk a little bit about the market that you saw. As I understand it, you're not so much in the creative part where people are producing games; you're focused on the end users, the players, the people who are in virtual spaces, and really empowering them to be creative through performance and motion. Can you take us through the opportunity you saw in the market and what you're trying to accomplish with generative AI and avatars at Kinetix?
Yassine Tahi: Yeah, sure. We've been working on Kinetix for the past three years, my co-founder being a researcher in AI applied to 3D animation. We started Kinetix with the idea of enabling anyone to create motion. We started with video-to-animation and recently launched text-to-animation, but we now prefer to say emotes rather than animation, and we differentiate between the two. For us, animation is what you need to build the game. Let's say you're building World of Warcraft: you create all the movements for your gameplay. That's animation; a game developer works with animators, brings that content into the game, and ships it. Then there is a second market, the market of emotes, which are the movements you buy in game, the most famous being the Fortnite emotes; they're making hundreds of millions of dollars every year there. This is where we focus. Once the game is launched, it's the in-game cosmetics market, and we operate in this market to enable anyone to customize and create whatever moves they want and bring them directly into the game, in a 3D space.
Jon Radoff: So first of all, I had a game that I operated way back in the '90s called Legends of Future Past, and like a number of other multi-user dungeons we had a slash emote command, so you could act out in text whatever it was you wanted to do. There were, I don't know, probably a few hundred verbs the game would recognize, but of course everybody ultimately wants to be able to perform things in a way that isn't constrained by the canned responses. So we had slash emote, and it feels like that's been something missing from these online spaces, right? You don't have that complete expressiveness. I want to return to the expression part, but let's talk a little bit about what you were referring to with Fortnite, because I want to double-click on that for a bit. You said Fortnite is generating hundreds of millions of dollars of revenue off the emotes they do have in the game. Could you elaborate on that? I think people may not even realize how much of the income of a world like Fortnite is coming from this kind of player expressiveness.
Yassine Tahi: Yeah, sure. Virtual worlds used to monetize through advertising, and we've seen over the past few years that advertising is declining a bit. We see that in-game assets, especially cosmetics, are the ones making the success of games like Fortnite, and the forecasts for the next few years show that this will keep growing. Emotes are one of the most-liked categories among users because they're very social; people use them to interact with each other. There are also other categories, obviously, like wearables and skins for your avatar. Fortnite has been very popular for memes and for bringing real moves from real life into the game, which also created some problems: there were lawsuits around Fortnite, with artists claiming ownership rights and a share of the money Fortnite is making. But Fortnite always won those lawsuits, because there is no protection for a few moves; you can protect a choreography, but you cannot protect a few moves. So I think there is a big opportunity there for working with creators and artists to help them monetize across different virtual worlds and games, bringing popular culture and internet meme culture into these virtual worlds, kind of bridging ByteDance and TikTok with virtual worlds and games.
Jon Radoff: Right. I want to return to the monetization aspect in a bit, but before we go there, let's talk a little more about how you're actually enabling this creativity. You're allowing, basically, text-to-animation, text-to-motion, and my understanding is there's a little bit of a delay before it can interpret that and actually implement it as an animation. Can you talk about where you are now with this technology, what the scope of the expressiveness is that someone can have, and where text prompts play into this?
Yassine Tahi: Yes, so there are two parts. There is the AI part, which is the customer-facing part where you type something and the emote is generated. Theoretically that sounds easy, but the most difficult part is putting it into production in a 3D world. Our model is B2B2C, so we work with virtual worlds like The Sandbox and Zepeto to integrate our technology into their engines. Basically we plug in our SDK to communicate with their avatar rig and stream the emotes from the cloud to their avatar system. As a user, you'll be able to prompt or input a video; we're working on general input that can be image, text, and also music, that's in R&D, but for the moment we have announced and released video, and text will be released soon. This then gets matched to the avatar rig in the game and makes your avatar move that way. So we're providing the end-to-end infrastructure for the virtual worlds so that end users can benefit from this in game. Today the state of the art in compute, as you can see with Stable Diffusion or Midjourney, is that you write a prompt, there's a compute time depending on the queue and how many people are using the service, because there are only so many servers allocated for it, and then the AI computes, sends back the animation, and plays it on your character. For now there is a bit of delay, and we're working on optimizing it so it's as smooth as possible for the user to write something and then play it. Obviously it's not a search through a library, it's generative; that's why it takes this compute time for the moment.
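To picture the asynchronous flow Yassine describes, here is a minimal sketch in Python: a prompt goes into a cloud queue, the client waits out the compute time, and the finished clip is handed back to the game's avatar system. The names (`service.submit`, `service.fetch`) are illustrative assumptions, not the actual Kinetix SDK.

```python
import time

def request_emote(service, prompt: str, avatar_rig: str, poll_interval_s: float = 1.0):
    """Sketch only: `service` is assumed to expose submit(prompt) -> job_id
    and fetch(job_id) -> clip-or-None; neither is a real Kinetix API."""
    job_id = service.submit(prompt)           # job enters the shared GPU queue
    clip = None
    while clip is None:
        time.sleep(poll_interval_s)           # generation is not yet real time
        clip = service.fetch(job_id)          # stays None until the job finishes
    return {"rig": avatar_rig, "clip": clip}  # ready to retarget and play in game
```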
Jon Radoff: So building big generative libraries is very data-hungry; that's why things like Midjourney have started to work really well, because they have this vast reservoir of data they can go to on the internet, pull in art and graphics, learn from it, train their models, and then you're able to text-prompt it. What I'm hearing from you, though, is that there's this text-prompting aspect, which is an ease-of-use interface for bringing various kinds of content to life on the screen, but you still also need the data. A big part of what you seem to be doing is dealing with the data problem, building up animation catalogs by having the ability to actually look at video and map that video to a particular avatar. Am I understanding it correctly?
Yassine Tahi: Almost. Let's say there are different datasets for training the different algorithms. We're using animation data, but training our deep learning algorithms on video versus text is not the same. For video-to-animation you need data from a camera together with exact motion capture data recorded in front of it, so that the AI understands that this video matches this movement; on a huge dataset it learns from that and is then able to reproduce a movement when you give it a new video. With text it's different: it still takes animation data, but it requires labeled data about emotion, gesture, et cetera. So we really rebuilt the datasets we already had in order to retrain and put that into production on the AI side. In terms of the SDK and how we interact with the avatar, the end of the funnel is the same: we send an animation, which is retargeted to the avatar rig and transformed into an emote. As I said before, animation and emotes are different, because there are some standards; you have to apply some filters. For example, you cannot move beyond a certain distance, because if you're in the game, play an emote, and jump off a cliff, it will mess with the gameplay. So there are constraints applied to emotes, and also moderation, because we want to ensure, especially in games for underage players, that no bad movement is brought into the game.
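To make one of those emote constraints concrete, here is a minimal sketch (not Kinetix's actual pipeline) of clamping root motion so a generated emote cannot carry the character beyond a small radius, the "jump off a cliff" problem mentioned above. The threshold value is an invented example.

```python
import numpy as np

def clamp_root_motion(root_positions: np.ndarray, max_radius: float = 0.5) -> np.ndarray:
    """root_positions: (num_frames, 3) world-space root joint positions in meters."""
    origin = root_positions[0]
    offsets = root_positions - origin                  # displacement relative to frame 0
    dist = np.linalg.norm(offsets[:, [0, 2]], axis=1)  # horizontal (XZ) drift only
    scale = np.where(dist > max_radius, max_radius / np.maximum(dist, 1e-8), 1.0)
    clamped = offsets.copy()
    clamped[:, 0] *= scale                             # shrink frames that drift too far
    clamped[:, 2] *= scale
    return origin + clamped                            # vertical motion (jumps in place) is kept

# Example: a clip that drifts 3 meters forward gets pinned near its start point.
clip = np.zeros((4, 3))
clip[:, 2] = [0.0, 1.0, 2.0, 3.0]
print(clamp_root_motion(clip)[:, 2])  # -> [0.  0.5 0.5 0.5]
```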
Jon Radoff: Right, I can only imagine the issues with that. Let's try to connect the different pieces and go back to the creator aspect of this. So I'm a creator and I want to make a new animation that I would love to share in virtual space across these different platforms you're talking about. What do I do? Just bring me to basics: how do I capture my video and my body movement, and then what's step one?
Yassine Tahi: It depends; we have multiple tools. We have a tool for creators, and that's what we're working with Zepeto to integrate. It's for the community of creators who want to create emotes and then put them in a game's library, for example; that's the creator audience. It's a cloud-based platform, already accessible as our studio at kinetix.tech. You can log in and you have multiple features: you can input a video and we extract the motion, you can add filters, you can edit, you can blend different emotes together, and libraries are also available. So it's a creator-friendly tool, and these assets can then be brought into different virtual worlds. That's for the creator. For the gamer, what we're doing now is integrating our SDK in game with some of the games we're collaborating with, so that when you're in game you have the emote wheel, like in Fortnite, and when you click on customize it opens your camera, you film yourself, and then your asset is directly available on your emote wheel in game. That's the workflow for gamers and end users, and those are the two categories we're addressing.
Jon Radoff: In the past, the way this would be done at, say, a AAA game studio is they would get people into a big studio wearing these whole motion capture outfits with light points all over them that get captured, and then the motion would be translated into the 3D geometry, the motion animation, which could then be used to render characters within a studio environment. Give me your thoughts on the trajectory of this kind of technology: that kind of motion capture tech versus something that just looks at a video camera. Where is the future here, and where do you see creators generally, not only on the Kinetix platform? Are people going to be using these motion capture rigs in the future? If they are, under what circumstances, and where will they not use them as much?
Yassine Tahi: Yes, so I believe in multi-camera. Just to come back: we're doing one camera, so you can input a video from TikTok or YouTube; it's more mass market. For the animator who wants to do more professional, high-end quality animation, motion capture data with the suits is obviously higher quality, even if you have to rework it, because the stability of the feet, for example, is not perfect, so you have to do manual work on the motion capture data you extract. For professional capture, I believe multi-camera with AI will be the future for the professional studios that want to create their own moves. For the creators, the longer tail, they will use mono camera, in my view. And then the question is not only how I can create the content, but also how I can bring this content into the games, because if you create a motion you also need to put it on an avatar and adapt it to the right bones and the right rigs. There are other technologies involved, because an animation that works on a specific character might not work on another one. That's what we call retargeting technologies, which transfer a motion from one avatar to another. So let's say there are different technologies today, mono camera and multi-camera, but I'm sure we will get rid of the hardware and keep camera capture over the next five years, let's say. On the longer term my vision is a bit different, and these are some projects we're working on with our R&D department. Ten years from now, I don't think there will be any more animation at all; it will be AI NPCs whose behavior is generated directly. We're working now, for example, on smart filters, and we're discussing with Inworld, who are doing the ChatGPT of avatars: learning from a text discussion or from a context, the avatar will automatically play naturally generated gestures depending on the dialogue. So it's linked to our text-to-emotes, but the future of text-to-emotes will be linked with emotions and being able to automatically create motion as you go. There won't be this process of game developers creating animation from mono camera or multi-camera or whatever; there will be game developers integrating an SDK that drives the motion of the avatars automatically, and it will be AI generated directly. If we think midterm, I think there is still space for this kind of traditional capture, doing what we've been doing for the past 30 years but with an improvement. So let's say that's an incremental advancement, but I think there will be a completely different approach to motion in gaming in the next few years.
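Retargeting, mentioned above, is the step that moves a motion from one skeleton to another. Here is a simplified sketch of the core idea: copy per-bone animation from a source rig to a target rig through a bone-name mapping, skipping bones the target does not have. Real retargeting also handles differing rest poses, bone lengths, and proportions; the names (a Mixamo-style source mapped to a hypothetical game rig) are illustrative only.

```python
# A clip maps each bone name to its per-frame rotation data (kept abstract here).
Clip = dict[str, list[tuple[float, float, float, float]]]

MIXAMO_TO_GAME = {
    "mixamorig:Hips": "pelvis",
    "mixamorig:Spine": "spine_01",
    "mixamorig:LeftArm": "upperarm_l",
    "mixamorig:RightArm": "upperarm_r",
    # ... the full map covers every bone shared by both skeletons
}

def retarget(source_clip: Clip, bone_map: dict[str, str], target_bones: set[str]) -> Clip:
    target_clip: Clip = {}
    for src_bone, frames in source_clip.items():
        dst_bone = bone_map.get(src_bone)
        if dst_bone is None or dst_bone not in target_bones:
            continue                    # bone has no counterpart on the target rig
        target_clip[dst_bone] = frames  # in practice: convert between rest poses here
    return target_clip
```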
Jon Radoff: I want to recap a little of what we've been talking about so that everybody appreciates the complexity of these problems. On the one hand there's the motion capture, and then the labeling of what motions even mean. There's the physics of spaces: just because you say you want to do an animation, it has to respect the actual physics and properties of a space, the obstacles, the other objects present there, and working around that becomes an issue. You also mentioned game design constraints: just because something might in theory be physically possible in the 3D area, you might not want to allow a player to jump, or jump off a cliff in your earlier example, because it may not be appropriate for the constraints of the game design system. And then, on top of that, going back to the basic motion capture pieces, getting high-quality capture still requires a multi-camera setup with motion capture rigs and whatnot, so getting the same kind of quality off of, say, a phone is a really hard challenge. So we've got a whole long list of really complicated problems here. On the motion capture side, if we think about a single-camera setup, and I'm just going to speculate for a moment, I'm seeing some parallels with some of the recent work on neural radiance fields. Not that animation is a direct map to the way neural radiance fields work, but the interesting aspect of them is the idea that you don't actually have to get a picture of every conceivable angle and position to translate an image into a 3D model; you can get by with a sparse number of inputs. In animation it seems like we almost need to do the same thing: we need to start filling in the areas of the unknown with AI models that essentially integrate information about the way motion should happen, even though we didn't actually capture it. Am I on the right track here? A big part of AI is filling in the gaps: not reproducing reality exactly as we saw it, but guessing closely enough that it's going to be accurate.
Yassine Tahi: Yes, you're completely right, but it depends on what we're optimizing for. That's why it's sometimes super hard to compare different algorithms for video-to-animation. For example, let's say your hand is behind my back, so you cannot see what I'm doing; the model has to infer among many possibilities. There is always a trade-off between fidelity to the input video and the realism of a fluid, natural animation. At Kinetix we made the choice of sometimes sacrificing fidelity to the input video in order to respect the dynamism and the movement, because most of the people we're addressing want to rework the animation. We prefer to enhance the animation a bit and make it look good rather than fit exactly the motion that was played in the video. Sometimes you have to make trade-offs, and these are the trade-offs we're making. And if we can push even further on that: I don't believe users only want to copy reality. That's what we learned over the past few years and what we're putting into our text-to-emotes. When you're in a virtual world and you want to express yourself, you want to be able to do things you cannot do in real life. That's where the smart filters come in; we have, for example, a robotic filter, so you can apply a filter to your emote to make it look more robotic or more excited. These kinds of creative filters, for me, are also the future of how we will express ourselves in virtual worlds, because why be our exact real-life self in the virtual world? I think that's a bit boring.
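The fidelity-versus-realism trade-off described above can be written as a single weighted objective. This is a sketch under simple assumptions, not Kinetix's actual training loss; `fidelity_weight` is an invented knob whose lower values favor fluid, natural motion over exact reproduction of the video.

```python
import numpy as np

def motion_objective(pred: np.ndarray, video_pose: np.ndarray,
                     fidelity_weight: float = 0.3) -> float:
    """pred, video_pose: (frames, joints, 3) joint positions."""
    fidelity = np.mean((pred - video_pose) ** 2)          # match what the camera saw
    velocity = np.diff(pred, axis=0)
    smoothness = np.mean(np.diff(velocity, axis=0) ** 2)  # penalize jerky acceleration
    return float(fidelity_weight * fidelity + (1.0 - fidelity_weight) * smoothness)
```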
Jon Radoff: Everything we do online is going to undergo massive transformation in the coming years. Right now we're still doing video the old-fashioned way: we do video conferencing just like we're literally doing right now, capturing video. But in the not-too-distant future it may actually be a lot more efficient to use some of these avatar codec systems, like the one Meta was demoing a few months back, where instead of capturing video we capture motion, map it to an avatar, and that becomes who we are in virtual space. I guess we'll all look better; we'll be able to look however we want. But you're also taking it to the next level: in addition to the particular gestures we're making and transferring over the internet, over the metaverse, we'll be able to have filters that process and modify our motion, we'll be able to add more motion, we'll be able to get up and dance if we want to. So we'll be able to integrate all of this expressiveness into avatar spaces. Now, right now there's still a little bit of a delay in your text-to-animation system; I think you were telling me maybe a minute or so to go through the process of making that happen. How close to real time do you think this will get? And let's dream for a moment, five or ten years from now: how will this technology really work online for us as end users, in everything from online games like Fortnite to video teleconferencing or whatever?
Yassine Tahi: Yeah, I think for now it's definitely asynchronous. That means you create your content, but then it's stored: you can create your moves, your signature moves, let's say, save them, and then activate them in game with a command like you were saying earlier, slash dance, and have your own dance on that command. So it's asynchronous, but in the future I think it will be real time; we're getting there. We're already doing some real time in 2D, as you can see in Microsoft Teams now, where you can have your avatar. So we're already doing it in 2D, and in 3D I think within the next five years it will be possible to do it and to give a live performance in a 3D space with a mono camera or a text input. I would need to check with my R&D team that I'm not saying something crazy, because I'm usually the more optimistic one, but I think it will be doable. The first step we're working on is optimizing, reducing the time, and then the question is: do we keep the same level of quality, or do we sacrifice a bit of quality for computing time? It's always a trade-off.
Jon Radoff: I'm remembering back to the Google Colab notebooks I used with GAN models for 2D image synthesis just a couple of years ago. When I was doing those it would take hours to make one frame; now you can go to Midjourney and get a frame in a few seconds, really less than a minute, and with some models it's down to fractions of a second. In video generation we're talking about synthesizing multiple frames per second. It's amazing to look at the exponential curve behind how fast some of these models are improving, and it seems like that will improve everywhere, because we've got GPUs improving and we've got algorithms improving. Your problem is really complicated, because you're trying to take a snapshot of reality, the way we move, the way we interact in spaces, synthesize that with animations, and then respect the physics of things. I'm also thinking of a research paper that came out just a few months ago from Google called Dreamix, where they're trying to get avatars to work with text-to-animation prompting. It's not exactly the kind of thing you're doing, but it's about respecting the physics of space, actually putting multiple characters together, putting in obstacles, and seeing how they respond. How do you bring that into an environment? It seems like a problem at the game-system level: the game system itself has to manifest the physics within the 3D engine, and at the same time you're bringing in all of this animation data that they may not have thought about. Take me through the complexity of that problem and how you're thinking about it.
Yassine Tahi: Everything that has to do with interaction is always complicated. That's why it took us, I think, more than a year and a half to solve the feet-grounding problem, because the first interaction, the one everyone has all the time, is having their feet on the ground. That's the first interaction, and it's a very, very complex problem to solve. The other aspect of interaction, bringing in external content that interacts with an environment, is something we're not working on at the moment, also because it doesn't serve the main use case we're addressing. We're deploying that first, and then there are many hard problems to solve like the ones you mention, but I think those will come more from the game studios themselves, on the AI NPC side I was mentioning before. I think it relates more to papers on reinforcement learning: how you can place agents that behave in a space and learn how to interact with that space. That's what we call reinforcement learning, which is another branch of AI. But I think that will come directly from the studios; let's say it's solving a different kind of problem, and it's more about the animation of NPCs.
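"Feet on the ground" is the interaction problem mentioned above. One common approach, sketched here under simple assumptions rather than as Kinetix's actual solution, is to detect contact frames (foot nearly still and near the floor) and then pin the foot in place on those frames to remove sliding and floating. The thresholds are illustrative.

```python
import numpy as np

def detect_foot_contacts(foot_pos: np.ndarray, height_thresh: float = 0.05,
                         speed_thresh: float = 0.02) -> np.ndarray:
    """foot_pos: (frames, 3) foot joint positions; returns a boolean mask per frame."""
    speed = np.zeros(len(foot_pos))
    speed[1:] = np.linalg.norm(np.diff(foot_pos, axis=0), axis=1)  # per-frame travel
    return (foot_pos[:, 1] < height_thresh) & (speed < speed_thresh)

def pin_feet(foot_pos: np.ndarray, contacts: np.ndarray) -> np.ndarray:
    fixed = foot_pos.copy()
    anchor = None
    for i, is_contact in enumerate(contacts):
        if is_contact:
            if anchor is None:
                anchor = fixed[i].copy()  # first frame of this contact phase
                anchor[1] = 0.0           # snap to floor height
            fixed[i] = anchor             # hold the foot in place while grounded
        else:
            anchor = None
    return fixed
```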
Jon Radoff: Yes, and just to correct what I was mentioning a moment ago: I said Dreamix, which is closer to the Runway ML stuff where they're trying to do restyling of video. The paper I was thinking of was actually a different one, on the physics of text-to-animation prompting. We'll provide links to some of these things in the show notes so people can check them out afterwards.
Jon Radoff: Let's double-click back to the conversation on monetization we were having earlier. You've also imagined this as being a new way to monetize content, to create a new form of creator economy where people can contribute animations and make content that other people benefit from in a community. Can we talk about that a little bit? This has been a big conversation around AI in general: on one hand there are people saying, well, we're just training models from observable phenomena and content that already exists, and then there's the opposite side of the conversation, which is that people are taking and benefiting from content online and making it available for reuse. What are your thoughts on that, why might animation be different, and how are you hoping to solve it?
Yassine Tahi: First, animation is a very complex task that is not accessible to many people today. There are only a few hundred thousand people who can do animation, and that's a shame, because it's the first form of expression and storytelling in 3D space; if you don't have that, you cannot make characters express themselves. So we think of it more as a liberation, taking something very technical that only a few people could do before and opening it up to many, many more people. For me it's not about replacing; it's about creating a whole new market and enabling people to do things they couldn't do before. The second part is the legal aspect. Today moves are not copyrighted, and we believe that, while they don't have to be, this can be a new form of reward for creators and a way to involve them in the process. That's why, for example, we had a deal with the BBC to work on Dancing with the Stars, creating the moves from the dancers with our technology and bringing them into virtual worlds; we brought the Dancing with the Stars moves into NextDancer, a dance game we're collaborating with. We're also working with The Sandbox on bringing emotes for celebrities into The Sandbox in the future, and, something not disclosed yet, working with big IPs to bring them into virtual worlds. For IPs it means they can now also monetize on these aspects, and we can work with brands and IPs, but also with creators who are doing super cool moves that they can put on the Kinetix platform, and then we distribute them across virtual worlds. Obviously, if you're Fortnite you can negotiate with these big IPs because you have the size to do it, but let's say you're a smaller game and you still want to bring fun, original content to your game: you can just come to us, and we open our library of branded emotes, made in collaboration with many IPs and brands, to many virtual worlds. On the other side, the IPs don't have to deal with hundreds of games; they can just work with us. We don't even need the artist to come in; we can create the content with our technology from their own videos, or they can send us a video, we create the content, and we improve it, obviously, because it's premium content, so we have our own team that redesigns and improves it. Then we can distribute it across all the virtual worlds we work with. It's kind of the same model as what Ready Player Me is doing with avatars and wearables, but for emotes.
Jon Radoff: You're touching on a topic which is a passion of mine, interoperability: this idea that creators can generate a type of content, script something, write code, whatever it is that defines how their creative process unfolds, and be able to deploy it across different ecosystems. Interoperability is a super hard problem, and now we're layering on the super hard problem of animation, for all the reasons we've already talked about: physics, game design constraints, motion capture, the prompting aspect, all of that. So how do you think about interoperability? I'm thinking of The Sandbox and voxelized environments, which are like Minecraft, all the way to games like Fortnite, and maybe even things that are a little more hyper-realistic. They've got a lot of different issues in their character structures, right? How do you deal with that?
Yassine Tahi: Yes, so as I said at the beginning, our SDK is plugged into the engine of the game, so we have access to the rig system, and then we have a technology called retargeting. Once we produce one emote, one movement, we can adapt it automatically; we have adaptations for multiple rig systems, and today we already support The Sandbox, Roblox, Unity, Unreal, Mixamo, and Zepeto, and we're also working on Meta avatars. So we're adapting our emotes to most of the common rig systems. Once you create an emote, you'll theoretically be able to use it in multiple virtual worlds, but we still see some resistance, because virtual worlds don't want you to be able to take this content and use it elsewhere. For now, though, with all the games plugged into us, an emote you created in one game can, theoretically, be used on another platform.
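Interoperability, as described above, amounts to keeping one canonical emote and a per-platform adapter that knows each engine's rig. Here is a toy sketch under that assumption; the platform names follow the conversation, but the bone maps themselves are invented for illustration.

```python
PLATFORM_BONE_MAPS = {
    "zepeto":         {"Hips": "hips",   "Spine": "spineLower", "Head": "head"},
    "unity_humanoid": {"Hips": "Hips",   "Spine": "Spine",      "Head": "Head"},
    "unreal":         {"Hips": "pelvis", "Spine": "spine_01",   "Head": "head"},
}

def export_emote(canonical_clip: dict, platform: str) -> dict:
    bone_map = PLATFORM_BONE_MAPS.get(platform)
    if bone_map is None:
        raise ValueError(f"no rig adapter registered for {platform!r}")
    # One authored emote fans out to every supported rig through its adapter.
    return {bone_map[b]: frames for b, frames in canonical_clip.items() if b in bone_map}
```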
Jon Radoff: Another thought that's occurring to me is about props, and maybe we're getting into future stuff, but that's fine, we're dreaming here about things that might take a while to get to. If I take Fortnite, for example, you're carrying weapons around, you've got guns and things like that; in another environment maybe you're playing with a ball, or I'm imagining juggling. So how do you start taking it to the next level beyond that, in terms of the kinds of prop interactions I might have in an environment? Because, if I go back to what you were describing earlier about truly democratizing animation so that anybody who wants to participate can go beyond the few hundred thousand people who do it today... geez, I think I just flagged yet another super complicated part of this technology, right?
Yassine Tahi: Yes, it's something we're not doing. As you know, for example, if you have a weapon in a game and you play an emote, the weapon usually disappears, you do the emote, and then it reappears. For the moment we're sticking to that kind of convention so we don't have to deal with the assets and can really stick to the animation file alone, which is already very hard. In the future we'll be able to do more; you can think of VFX, for example, when you play an emote. If we go crazy and think about the future, my idea was that when you create an emote, you have different kinds of elemental emotes, like water emotes. Our team worked on a fun project where, every time we generate an emote, our AI calculates the spin, how aerial it is, whether you're flat or jumping, and a number of other criteria to determine whether it's water, electricity, wind, or something else, and it then assigns different attributes. So when you play an emote you would generate electricity or water, for example, because when you play an emote you want to, as the young people say, flex and show that you're the best. If you want to generate storms and things like that, they would be generated, and my team was happy to work on this and showcase some of it. But bringing that across games is a challenge, because everyone has their own rendering system and VFX, and we cannot deal with multiple assets for the moment. That's why we decided to stay focused on emotes and bring emotes to players on a more social level; the idea was to restrain and constrain the problem we're solving. It's already a big change to bring an external asset directly into a game, let alone multiple assets and VFX and so on; that would be difficult to bring cross-game.
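The "elemental emotes" experiment described above can be pictured as a toy classifier: compute a couple of statistics over the generated clip (how much the character spins, how long it stays airborne) and map them to a VFX attribute. The thresholds and labels below are invented for illustration, not the actual rules Kinetix's team used.

```python
def classify_emote_element(total_spin_deg: float, airtime_ratio: float) -> str:
    """airtime_ratio: fraction of frames with both feet off the ground."""
    if airtime_ratio > 0.5:
        return "wind"          # mostly aerial moves
    if total_spin_deg > 720:
        return "electricity"   # fast, spinny moves
    return "water"             # grounded, flowing moves

print(classify_emote_element(total_spin_deg=1080, airtime_ratio=0.2))  # -> electricity
```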
Jon Radoff: Well, you're actually giving us a good Product Management 101 lesson for startups here, because the fun thing about everything you're talking about is that you can dream really big. There are all these aspects of animation, performance, even almost cinematography that you could potentially bring into this, and it touches all these super complicated problems: physics, motion, performance, expressiveness, even narrative. But to be a startup you actually have to start with the problem you can immediately solve. So I'd like to talk a little bit about your own entrepreneurial journey. What brought you to this problem, how did you settle on this particular subset of problems, and what's the history, the story so far? Take us through that.
Yassine Tahi: So before Kinetix I came from finance, from VC. I was working with startups a lot; I helped launch what is now one of the biggest VCs in Morocco, where I'm from, and I was working with a lot of SaaS companies and marketplaces. Let's say that in Morocco we're still largely taking innovations like e-commerce marketplaces that already exist elsewhere and trying to bring them to the local market and the MENA region. I saw a lot of startups like that, and I thought, I don't want to create a startup like that; I want to create a startup where everything is disruptive and new. I took on the challenge: since 2020 we've been working on AI, which was a new market, together with the metaverse and virtual worlds, which was also a new market. I wanted something at the edge of what technology can bring and what technology can do, and when I was choosing the industry I also wanted to have fun. Gaming is a real passion; everyone in gaming is passionate. I spent my whole childhood playing MMORPGs, and during Covid, when I started Kinetix, I was playing Lineage 2 again, my favorite game of all time, and I thought, I've never had so much fun; I want to create a company in this field. Then it was a question of how to bring technology to this field, and with my co-founder, who was doing AI in 3D animation, all the pieces were together to launch Kinetix. I think we made the right choice, though we were a bit early to market. It was challenging in the beginning, especially raising funds when you're a bit early and the trend isn't there yet; you have to educate a lot and explain what you're doing. When you talk about complex animation issues to VCs, they don't understand what animation is or why it's complicated. They'll just say, oh, 3D animation is a small market, we don't want to deal with that; then you show them that Fortnite is generating almost a billion dollars every two years from it, and suddenly it's, oh, maybe it's an interesting market. So it was a challenging story to explain, but I think we have great momentum now with generative AI, and also with the metaverse: everyone is now convinced we will be living in virtual worlds, not necessarily with VR, but experiencing avatar-based identity across platforms. It can be through our stickers on WhatsApp or through our avatar in Roblox, but we will have a virtual identity; it's already here today and it's growing. The second trend is UGC: people will be customizing their identity and their assets more and more, and they will be able to bring those into games; that's growing as well. And the last part, of course, is that AI will be the facilitator of all of this, and it's growing more and more. The quality level we've reached in text and image, and soon in video and then 3D, supports the point that in a few years the quality will maybe be the same as Pixar. Three years ago I was doubting it; now
I'm like, maybe. Look at what we've reached in terms of image and video; maybe we'll be able to generate animation that reaches Pixar quality five years from now. That's very exciting, because everyone will be able to tell amazing stories and share amazing content. I think it's very exciting to live in these times.
Jon Radoff: You're talking about what I think is the right way to define and think of the metaverse. Over the last couple of years it's had all these different definitions: Roblox is the metaverse, Fortnite is the metaverse. We've got hundreds of millions of people interacting in the metaverse if we think of it as virtual worlds and virtual spaces you can go into, project yourself, and then express yourself creatively. I think it's that creative aspect that's really core to the metaverse. We might have more and more real-time and more augmented and virtual reality interfaces to these systems, but it's not about the interface; this is just my own opinion, the way I describe it. It's not the interface, it could be on any kind of screen. The metaverse is more of a state of being: it's about becoming digital humans and being creative in these spaces, and you're playing right at the middle of that. How do you think this notion of the metaverse is going to evolve over the next couple of years? It's become almost a toxic term over the last couple of years, for a variety of reasons, but is it going to come back now? I saw Tim Sweeney posting the other day about how, hey, look, there are like 600 million people in the metaverse, and rumors of its death have been greatly exaggerated. What's happening here, both with the word and, more important than the word, what's happening in the world?
Yassine Tahi: I think it all comes back to hype cycles and the media. The metaverse was the buzzword; everyone talked about it, everyone was excited, and everyone defined it the way they wanted. But if we come back to usage, to where people actually spend time: when I was playing Lineage 2 and I was on TeamSpeak with all my friends, spending time not only playing but spending time together, having fun and socializing, that was already part of what we call the metaverse. I think we're getting there slowly; if we take the full definition, it's taking time, and it will all converge. No, Web3 is not dead; no, the metaverse is not dead. All of these technologies are combining to enable use cases, and adoption is slowly increasing. And as you said, there are already 600 million people playing, and guess what, they're the younger generations, so they're here to stay. I think hype cycles and the media are sometimes misleading about what's happening, and we've been brainwashed with this idea that the metaverse is us in a VR headset, like Ready Player One; that's not my vision of the metaverse. As you said before, virtual worlds will be about socializing, creating, exchanging, building economies, places where there are interactions and economies. That's something that is already happening, and it will keep increasing. It can happen on a phone, or on an AR device if AR devices become good enough that it's natural for people to use them; and if not, that's okay, the idea of the metaverse is still there. It doesn't depend on a specific device; it depends on the usage people have, which is already there and growing.
Jon Radoff: So the metaverse, whether it's Lineage, Fortnite, Roblox, The Sandbox, Minecraft, all of these are manifestations of projecting ourselves into digital space, and more importantly our expressiveness, our creativity, in virtual space. Core to that is democratizing the ability to shape these spaces, which is what you're doing: making it possible to go from the few hundred thousand people who can create animations today to maybe a few hundred million people sometime down the road. That's what the metaverse is going to be. All right, Yassine, this has been really fun. The problem you're working on is super complex, and I have a much deeper appreciation for the challenges of animation systems with characters, and of prompting them into existence, from this conversation. Thank you so much for joining us for this episode and talking about it. I definitely encourage everybody to check out the links to your company and some of the research we mentioned along the way. Yassine, thank you so much for being on the episode.
Yassine Tahi: Thank you so much.