AI and the Search for Truth and Answers | Aravind Srinivas | Perplexity.ai

Originally Broadcast: July 20, 2023

Perplexity allows you to ask questions and receive answers supported by citations from online sources: Wikipedia meets AI. I spoke with Aravind Srinivas, founder and CEO of Perplexity.ai, the company behind it. They use a combination of GPT, good UI/UX, and a lot of behind-the-scenes magic to equip you with an artificial intelligence that can make you more informed, smarter, and maybe even more compassionate.

00:00 Introduction
02:56 Product Demo
08:20 How Perplexity will Win
11:50 Wikipedia meets AI
14:14 Handling Controversial Topics
14:41 Return of Nuance?
22:22 LLM Technologies Used
27:22 Alignment to Ourselves
31:08 Does Writing Matter Anymore?
35:57 AI Copilots for Life


Aravind Srinivas: Imagine basically the toil and human labor of a journalist, or an academic scholar, or a person writing Wikipedia pages, being done for you in a matter of seconds instead of hours, at your demand, so that you can keep asking whatever you want, completely personalized to you, knows what you want and what you don't want, and available for free. That's basically what exists today, and that is the flexibility of AI. So that's what we're building. We call it answer engines. By virtue of focusing on this one thing, which is boring but incredibly, incredibly useful, we nailed it.

Jon Radoff: I am with Aravind Srinivas, the CEO and founder of Perplexity AI. We're here to talk about the future of search, the future of large language models and generative AI. Aravind, welcome to the program. Why don't we kick things off: tell us a little bit about what you see as the future for both of those things, search and generative AI, and how do those things work together?

Aravind Srinivas: Thanks for having me here, John. The future of search will not be links. We used to go to the libraries to learn about anything we wanted. We used to manually pick the books, organized by the first letter of the title, pick one up, go to the bibliography, find the organized set of keywords, and then do all the searches ourselves. And then we figured all this was unnecessary manual labor and we could just directly Google it. But then the idea was you would Google it, it would pull up the links, you would click on a link, and you would still go through and sift through the links, read different links, and come to the answer that you wanted. Now, whatever Google did to libraries is happening to Google itself, which is: you don't have to sift through links, you don't have to open many tabs and read through the pages, you directly get an answer instead of links. Or Google sometimes gives you answers, which are extracted parts of the top link, but the problem is it's not robust to SEO, and so often it ends up being more annoying than useful. Now, for the first time in two decades, you have this amazing reasoning engine, almost like the steam engine of intelligence, that's the best way to understand it, called large language models, LLMs, also referred to as generative AI. And what they can do is all this manual labor that you're doing now, which is entering the keyword on Google, opening the links, reading through them and getting the answer. Somebody does that for you and gives you the final summarized paragraph or paragraphs with the proper references, so that you can trust what it says. Imagine basically the toil and human labor of a journalist, or an academic scholar, or a person writing Wikipedia pages, being done for you in a matter of seconds instead of hours, at your demand, so that you can keep asking whatever you want, completely personalized to you, knows what you want and what you don't want, and available for free. That's basically what exists today, and that is the flexibility of AI. So that's what we're building. We call it answer engines, conversational answer engines: the fastest way to get answers to any question, and the fastest web research companion you can have, an expert researcher at your demand, in your pocket, all the time. That is just one application of generative AI, though. There are a ton of other applications: being your marketing copilot, being your sales copilot, being your poem-writing copilot. Like, if you don't know how to write poems, and you have a loved one you want to impress on a date, or you want to be sweet to them on your anniversary, and you're trying to write such a personalized poem that only you both have the shared context for, but you don't have the skills of a poet. Or you're trying to brainstorm ideas for a gift to give, or surprise date nights. Or an expert tutor for your kid: you want them to be really smart, but you don't have the time to dedicate, and they keep asking you all these questions like "why is the sky blue?", and you know stuff, but you don't exactly know how to explain it to a seven-year-old, and then you can quickly pull this up as a copilot that can help you be a better dad. So many applications that were never possible before. This reminds us all of Steve Jobs's quote, right, which is: don't give people what they're asking for, give them what they don't even know they want yet, and then once they have it, they absolutely want it. So that's what generative AI is. It's not something people asked for, but now that they have had access to it, they're like, oh my god, I don't want to go back to the old tools. I don't want to go back to writing every Google Doc myself, I don't want to go back to writing every line of code myself, I don't want to go back to writing my speech myself, I don't want to go back to clicking on links again. And this is just the beginning. There are trillions of dollars of economic value going to be created, because every hour of human labor has a certain intrinsic value. You're replacing a lot of manual work, and that itself already has a current dollar value, but because of that, every human becomes a lot more empowered. They can do a lot more in less time, and so the value of human labor per unit time also goes up, and that way we create incredible economic value that was never possible before, even bigger than the internet, even bigger than the mobile phone. It's exciting times, it's just the beginning, and we should all be part of this revolution.

Jon Radoff: There are so many things to unpack there. We're going to try to break that into sections now and drill into them. But first of all, let's begin on this whole idea of links, actually, because as a person who started playing around with some of these chatbot systems built on language models, even before the whole explosion of ChatGPT, it was very apparent that if you get a result, you immediately have to go and verify all the data that you're actually receiving, because these things hallucinate. And I see that with every language model. Like, I used Anthropic recently to help my team win this hackathon in L.A. for Tech Week, and Anthropic was great for that, because we needed it to help create fantasy role-playing-game kinds of scenarios, and hallucination, as one of my previous guests, Hilary Mason, said, is a huge advantage if you just want to make up lots and lots of fiction. It's problematic where you want it to give you accurate data about things. During the course of this hackathon, I really became interested in, well, how do we optimize the whole chat process? Because, for people that aren't aware, it's typically done in this very inefficient way: you just keep concatenating chat on top of chat, and you end up with this massive prompt in the background that you pass off to the model. It's like, there's got to be a better way to do this. So I asked both ChatGPT and Claude, okay, what are the optimizations, and they basically completely hallucinated things that sounded amazing when I read them, and then when I searched online, the papers that they were referencing made none of the claims they were claiming. Which brings us to Perplexity, I believe, because it seems like that's the problem that you actually saw and that you're solving, because you're giving me cited research, cited web pages, to support the search results. So in essence, although the language model interaction is really interesting, you're saving me the extra labor involved in going and doing all this research to see if the results it gave me are even accurate whatsoever. So first of all, is that a decent take on how you spotted the problem here? And I'm just curious, how are you going about that? Why are you able to do that? Why aren't Bard and ChatGPT, et cetera, already doing that?

Aravind Srinivas: We obsessively focus on this and nothing else, so that's why we are really good at this one thing. ChatGPT is a bigger platform than Perplexity, no doubt. You can look at the traffic: I think as of today, Perplexity has like 20 million visits a month, Bard has like 130 million maybe, and ChatGPT has like 1.3 billion. So 6x bigger is Bard, 60x bigger is ChatGPT. That's the current state. Now, why are Bard and ChatGPT not able to do what we're doing? I have very simple explanations for that. ChatGPT's biggest reason for growth was the thing you mentioned: people love the hallucinations. Hallucinations actually have a market value. But we were like, okay, that's already a cornered market, there is one big leader there, we may not have the virality and the amazingness that people feel. Okay, so here's the thing: if you have a friend who's always factual and correct, versus a friend who says whatever they want, and sometimes they're saying absolutely ridiculous things, who do you invite to the party?

Jon Radoff: I would bring them both, that would be a fun party.

Aravind Srinivas: Fair enough. But what I meant to say was, Perplexity's persona is more like this professor who is very scholarly, and ChatGPT's persona is more like this really smart kid who knows a lot but is not always correct, like a know-it-all, but says things in an entertaining way. There's a reason the second has more high-level attraction, and obviously our brand and OpenAI's brand are pretty different. By virtue of focusing on this one thing, which is boring but incredibly, incredibly useful, we nailed it. And when they tried to do the same thing, which is ChatGPT's browsing plugin powered by Bing, it's very clunky and doesn't exactly work. When they have a big product and they're trying to do everything, it's very hard to compete with another company that's allocating hundreds of hours to this one single thing. And for Bard, the answer is: they have the best search index in the world, which is Google, but it's amazing that they went after ChatGPT, and that tells you that those are reactive products. They were like, okay, there's this Google Search thing, people come, they have to search, and ChatGPT is something somebody else has, and we need to have something of our own that's competing with it. That's sort of Bard's reason why it's even there. So that's why it's not able to do what we are doing: it doesn't have citations, it makes up stuff, it's not as fast. And we are very happy in the position we are in, because we can just focus on nailing this one problem, which has so much scope to innovate here: the way you render the answer, the way you make even more complex research easy, improve the latency even more.

Jon Radoff: I also noticed with Perplexity that the information it's able to respond with includes current information. I think the ChatGPT cutoff is September 2021 or something, but I was able to enter current events and it was able to give me intelligent responses there as well. Can we drill down a little bit more into the technology and how you've approached it? How are you identifying citations to the content of the answers that you supply in what you've described as an answer engine? Also, how are you keeping it current, continuously looking across the internet to pull in these additional new sources? And then I guess the third part of it is, how do you verify that the sources you're pulling in are accurate for the questions being asked?

Aravind Srinivas: goes behind the scenes to give you the experience that you have this is amazing and in some senses like playing the orchestra person playing the keyboard and then the person playing the violin and like you're playing the orchestra like and these are like different parts like query the search the links the content within the links and the reasoning engine the LLAM that takes all the content and the process system and uh rewrites them in a format consumed by you adding the citations the appropriate paragraphs so all that is like basically the orchestra how do we do this we basically create a persona of Chatt Gpt that is meant to be like a person writing a Wikipedia article if you write a Wikipedia article you have to cite everything you say now what if you make an LLAM

Aravind Srinivas: have that persona where it only says stuff it can cite so every sentence in perplexity has citations

Aravind Srinivas: for that precise reason that if you wanted to make a Chatt Gpt like system only say truthful things what is the measure of truth like how can you inject that personality one way to do that is to just ask it to behave like a Wikipedia content writer or a journalist or an academic scholar and so that's the core idea only says stuff you can cite you know it's kind of like interesting citations where the core founding story of Google because Larry Page was inspired by the academic citation graph of like important papers are those that get cited by a lot of other papers and that idea was mapped to the web saying important web pages are those that get cited backlinked the web pages and that led to Page rank the founders of our perplexity three of us out PhDs and the citations thing we just obsessed over it we know in and out of our works how can we bring that nature to these large language models the output of that is what you see in the product one of the things

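The "orchestra" Aravind describes, search, fetch, then an LLM instructed to answer only from the fetched sources and cite each one, can be sketched roughly as follows. This is an illustrative sketch only: the `search` and `fetch` functions and the LLM step are stubs, and Perplexity's actual pipeline, prompts, and models are not public.

```python
# Minimal sketch of a citation-grounded "answer engine" flow, assuming
# stubbed search/fetch and an LLM call replaced by returning the prompt.

def search(query):
    # Stub: a real system would hit a search index and return ranked URLs.
    return ["https://example.org/a", "https://example.org/b"]

def fetch(url):
    # Stub: a real system would download and clean the page content.
    return f"(content of {url})"

def build_prompt(query, sources):
    # The key idea from the interview: give the model a Wikipedia-editor
    # persona, so it only says things it can support with a citation [n].
    numbered = "\n".join(f"[{i + 1}] {url}: {text}"
                         for i, (url, text) in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite every sentence with [n].\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {query}\nAnswer:"
    )

def answer(query):
    sources = [(url, fetch(url)) for url in search(query)]
    prompt = build_prompt(query, sources)
    # Stub LLM: a real system would call GPT-3.5/4 or an in-house model here;
    # we return the assembled prompt so its structure can be inspected.
    return prompt

print(answer("Why is the sky blue?"))
```

The structure, not the stubs, is the point: retrieval runs first, and the model is constrained to the retrieved text, which is what makes every sentence citable.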
Jon Radoff: I'm really curious about is how different sources could be weighed and got operated into a result so I'll just give you an example that I that I played around with a little bit on perplexity and at the risk of treading into controversial search topics I'm not going to take a position at it I'm just going to tell what perplexity so I said like was covid caused by a lab leak so it came back with it it wasn't like certain or anything but it it gave me a result that was essentially well there's some evidence that it may have been caused by that and and here's what the evidence is which was really cool because I could look at what the sources are if I if I recrafted my prompt a little bit and I said what is the evidence against a lab leak it came back with a response that was more like well all the evidence is that it is from a biological or a zoonotic origin and that it that's probably what it is but here's some evidence from from some of these things it seems like there could be like a whole new version of the future of SEO that will take place in this so search engine optimization where it'll become kind of like chatbot optimization like convincing chatbots that you are the authoritative source to kind of protect your own world view and both of these sides obviously there's a scientific basis for some of them there's also politics involved and people wanting to believe in one versus another for whatever you know biases they happen to have that seems like a super complicated problem to unravel in search meets LM how do you think about these kind of complexities and and how do you weigh different sources of evidence

Aravind Srinivas: I kind of always go back to Wikipedia because that's sort of like what are we doing we're basically dynamic personalized Wikipedia pages on the slide right so how do we compete a content writers handle this like you can write a Wikipedia article that size anything right and you can write you can cite the tweet you can cite the blog posts but they don't do that the moderators only allow you to cite platforms that are of high reputation scores notable sources and notable sources yeah notable sources then that gets to the question of what is notable and like you can have some rough heuristics for this like one measure of notability is like there a lot of traffic to it if something is completely fake then obviously even though it might be entertaining in the short run if it's just going to keep spreading fake news people are not going to use it okay take true social right I mean you know like whether it's truthful or not but it doesn't have asmospheric right I think you can't just spew fake news and how website that has high traffic every day so that's also kind of why like New York Times or Financial Times or Wall Street Journal has a lot of traffic has a lot of readers so people can have trust what it says and if these newspapers do a bad job at it then they're going to lose viewership readership so that's the sort of inherent social structure around like having enough checks in place to say truthful things and then that can be used as a heuristic of notability like some measure of page rank and and then that can be used as a measure of like prioritizing which domains aside if there are domains that are frequently cited by Wikipedia then those are good domains and NLLM's can bootstrap from that because it's already work wedded by humans so there's like ways to get around this and like we you know like we we are thinking about these things and then eventually beyond a point of time we'll be able to build up sufficiently good trust core page rank of 
the web trust map of the web and then we should be able to use that to improve the search quality even more it's sort of like writing a good paper good research paper a good research paper eventually gets cited by other good research papers over time once it gets cited by other good research papers the information propagates even if your website may not get directly cited because that information is broadcasted in other websites because what you said is fundamentally true it'll have a way to make it to search for results so it's kind of interesting it's even more powerful than quick streams and quick logs where people just who can gamify the quick process get their links displayed higher but I think LLM's are sort of like a dead blow to them in some sense like you really have to produce good content and it's kind of like giving power back to the genuine content creators who basically made the internet you know that they were the reasons for the internet to actually become big and then it got taken over by all the spammers the OG uses the internet for all like

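The notability heuristic Aravind outlines, bootstrapping a domain "trust score" from sources that vetted humans (e.g. Wikipedia editors) already cite, might look something like this in miniature. The data and the scoring rule here are invented for illustration; Perplexity's actual ranking signals are not public.

```python
# Sketch: score each domain by how often it appears in a set of citations
# made by already-trusted, human-vetted pages. A real system would use far
# richer signals (traffic, link graph, recency), but the bootstrap idea is
# the same: let human vetting seed the machine's notion of notability.
from collections import Counter
from urllib.parse import urlparse

def domain_trust(trusted_citations):
    """trusted_citations: URLs cited by human-vetted pages (e.g. Wikipedia)."""
    counts = Counter(urlparse(u).netloc for u in trusted_citations)
    total = sum(counts.values())
    return {dom: n / total for dom, n in counts.items()}

# Made-up citation list for illustration:
citations = [
    "https://www.nytimes.com/a", "https://www.nytimes.com/b",
    "https://www.ft.com/x",
    "https://random-blog.example/post",
]
scores = domain_trust(citations)
# www.nytimes.com is cited most here, so it gets the highest score (0.5)
```

A full "trust map of the web" would iterate this PageRank-style, letting trusted domains confer trust on the domains they in turn cite; this one-step count is just the seed.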
Jon Radoff: Yeah, and it seems like one of the opportunities for language models, and applications driven by language models, is to restore a little bit of nuance to some of these conversations. Because when you go down the rabbit hole of a lot of these online conversations or arguments, there's so much politics and embedded bias within them that you can get trapped within these echo chambers and comfortable views, where there aren't even any variant viewpoints. Going further down this lab leak example, because I think it's just an interesting one: there does seem to be interesting scientific evidence kind of on both sides of this now, although it remains inconclusive. It's interesting because you can do a search which can present you with both sides. So there's this other feature in Perplexity where you can create a thread, and I really like this idea of suggesting related follow-up questions, so that you can intelligently navigate that content, because one of its follow-ups was: well, what is the evidence for and against the lab leak hypothesis? So I clicked on that, and it gave me a pretty good follow-up, where it organized things and said, okay, here are the arguments for, here are the arguments against, and everything was cited. It seems like this could help people a lot, not only in understanding their own viewpoint, like, I feel people go online to just sort of collect more evidence to reinforce what they already believe, but if we could actually get people to understand other people's views, and why they came to believe them, and not ignore that, that could be super powerful. And I see that potential not only here in Perplexity but in language models in general: it can surface information into this more synthesized format. It's different than going to Google and getting some webpage with its bias inserted all over it in a way that's not apparent to you. What are your thoughts on this idea of nuance and incorporating multiple views?

Aravind Srinivas: our purpose you know demands is hard especially doing it for humans demands is actually pretty hard this job that's being performed by perplexity of giving you this demands and detailed viewpines with appropriate citations and surface in content from multiple different views in the most consumable way that job for a human to perform is pretty hard and that job has intrinsic value for other humans that we are like already pretty clear of what value we're adding to human society yeah the covid lab leak because I think in fact it was one of my test queries in the beginning to when I was trying to like stress test our product and many other examples to be fair we can improve on something I think like right right now the LLM's have a clear bias I'm not going to say to us with side people know that so I think that can be improved with the base model itself and I think opening eyes working on that and we can also do some work there yeah there's more work to do and making sure that your bots are like having no opinions of their own and are not like our readers of truth but they're more like helping you seek truth so be the maximally true seeking knowledge seeking platform on the internet that's our mission basically I'm curious about what you

Jon Radoff: can tell us about the LLM that's powering this if I recall back six months ago and we first met I think yeah told me it was gpt3 or something was the right we still use yeah major majority

Aravind Srinivas: of the heavy lifting is still being done by GP3.5 and 4 but we have begun rolling out our own LLM's and I think we are pretty excited about doing more of that I would just say that LLM's are becoming commoditized and more like parts that you buy to build amazing cars of course the parts will be expensive and the one with the best engine will win but our purpose is sort of to build the best product experience and make sure that users can get the best answers on one side and so until a point where our LLM's are like as good as opening eyes like we don't need to wait till that moment happens for us to already serve our users right I believe the cost of LLM's and the price you pay all are going to go down and the capabilities are going to advance that will be a great future to live in where we can all like use multiple different data big like it's basically like using different versions of sequel or something like that I think I think it's going to become more like few suppliers exist and like a lot of people use that and like build amazing products that last many

Jon Radoff: years what's your thoughts on these open source language models like LLM and Falcon 40 and some of them seem surprisingly competent for given the hardware that you can run them on like what is the trajectory of that and also your thoughts on decentralized open source versus like centralized behind an API gateway kind of language model systems I think it's great that LLM elite it's kind of

Aravind Srinivas: like the lab leak you know I think what it did in terms of people taking it and making it really fast on MacBooks and like trying to create a world where people can tinker and have their own LLM's that they kind of completely control it's amazing that gets into the decentralization part of it right where like if John wanted to have his own LLM that represented him and always work for him would he trust Sam Aldman with his data or like would he trust himself and like you trust himself as long as he can do all the training and deployment himself and then that's the question of like if you have the tools and if everybody becomes a programmer and that's amazing and if a centralized LLM like GPT4 helps you write the code for maintaining your decentralized LLM and training on it and doing the deployment on it that's a great world to live in right like you have some things that are only personal to you that you don't share with anybody else and you have the whole stack on your laptop and there are some things that you're okay like you know the queries if whether it comes from John the people know that it doesn't matter as long as it helps you get the job done and that's also great and I think we live in a world where both are true and there are like some few tinkers who might want to play with their own things and there are some people who just want to use the centralized things and I think their ideal world would be like if one helps people become better and dealing with the other like I like I don't think everybody has this goes to go understand what LLM or CPP is how to like install it on their MacBook what is even terminal but if you can learn all that with GPT4 Powered System and then take that and like train it on all your emails and chat and like make it really behave like you or you could train it on someone you really admire and like you could talk to them and you're paying for your the time and the computer takes to get that but the value get 
out of it is so much higher that you're okay with it and there may be a world also where like you could these these become like lifetime subscriptions where someone else does the whole packaging for you like like you would give them your all your data and like they would train them out separately for you and package it all into like one exact executable and like you buy that executable and run on your MacBook and all your data is immediately deleted from them you might have to pay like 100,000 bucks for that or like take some 10,000 bucks for that just like how you buy an iPhone or MacBook and then you use it forever like that's an interesting company to build right I haven't thought me I just came with idea now like let's say you John like yeah you don't know how to do the model training but you go to a friend of yours who knows how to do it and it's almost like a foundry right they go to the they do the manufacturing for you and maybe there are some devices that everybody might want like an AI Steve Jobs this person doesn't need to make one device this was John like once he makes it he can tape it and like use it for anybody and then sell it for like a few thousand bucks so that'd be cool I can see a world where we have this lot of these individual lamps and there's also world where the central of them is just so smart and replacing human knowledge work and both could coexist really nicely my goal is to is to clone the

Jon Radoff: neural networks inside my brain into some silicon formats that I can replicate myself a lot of times and just because I gave this talk at MIT a few weeks back and the point I made is that we're going from you know the online world being a place where first it was just about connecting and being there and then it became about creativity and making things online which is sort of like online games and blogs and all the various ways you can express yourself but the next generation here may need more about sending our will online so if I could train like a language model or some kind of AI that replicates the things I care about approaches problems the way I do it could go and do work for me it could come back and share things with me that it learned to make me smarter so that I can re-incorporate all this knowledge back into my biological neural network so I'm excited about that future that said you pointed out one thing that people are afraid of which is the potential to replace some forms of knowledge work maybe all forms of knowledge work some people are super afraid about all of this stuff I don't really want to get into the whole existential risk type things but maybe to bring it into something that might feel a little bit closer what are your thoughts on like the economic period we're about to enter into at the very beginning of our conversation you talked about how the value of a unit of time can start to go up a lot but getting there might also be really messy so just you know thoughts about the pathway the roadmap into the future of these

Aravind Srinivas: technologies what are your thoughts so I think everybody will feel like a chief executive officer of their own company and everyone will feel like they're a growing stock where they get a lot more done for unit time I mean why are we all employed right like we are all employed because someone else is achieving something much bigger than us and we're being part of that now what if like you can just run your own business what do you actually need like you need to code a little bit not so much and you need to do some marketing and do some sales and you define users and if AI helps you do all this with just like you yourself or one other person at best instead of building a 10-100 people company that's pretty amazing like you're gonna and like and if these employees are not even humans and like you're you're you're you're you could even have AI co-piles work for you and just one person overseeing all that that's even more amazing like it's you don't have to like you know be as stressed as a star of founders the day or like like five years ago they're not feeling of having personal assistance expert tutors expert marketers like basically having access to the world's experts information is well-thread John like you just can make way better decisions with better more data and if people will go wrong less like like less often people will be right more often and then people will be able to do a lot more things people will feel more empowered is that a bad future I think I think not like but but they should learn how to capitalize on it just like how people capital who capitalize on internet or phones before others got more economic value out of it before the rest of the population the long tail came in that power law exists for everything and I think that'll exist here as well except that the only worry I have is that power law here will be more extremely picky and long tail like the ones who are very early in on AI might get a lot of benefit compared to 
the ones coming later on but the ones coming later on also are going to have a great world to live in that sort of the world I imagine like people will be more independent there will be a lot of smaller startups that are building our companies there'll be a lot of economic value created no one people on company other than like you know the company that has the best largest language model will be controlling the whole field and there's like a lot of space in

Jon Radoff: where other people will grow so you you just mentioned expert tutoring which I think is a really interesting application of of this technology so I actually just had a conversation with my son the other day he's about to turn 12 and we homeschool our kids is talking to him about the skill of writing and he made a point to me of like well I don't why do I need to learn to write anymore I'll just be able to go to chat GPT or some version of that which will get better and better why is why is writing even a useful ability and so I said well okay you you do have a point like I think that these tools will be able to help you in an incredible amount in the future my thought is like we're not that close to it helping you articulate your ideas that are unique to you and compose them together it may get there at some point it's not quite there yet but it also led me to think about education and learning and the process of as you put it having a co-pilot there for you as you learn and write and kind of explore what it is that you actually think about things so I gave you a lot to work with there but I'm curious about your thoughts in how this impacts the whole future of education yeah obviously you spent a lot of time in education you have a PhD so like yeah you've you've gone through every aspect of that but how is that going to change in the future will PhDs be something that someone get in the future if there's all these AI technologies available that will have immediate access to expert knowledge of all kinds I'm trying to understand

Aravind Srinivas: this better myself so when I was I mean I was a pretty good student like you know I did like the there's this thing called IITs in India which is pretty hard to get into and better trained for for multiple years so I did all that all of the ads my sense is that if a map encoding just because chantypities are good at it doesn't mean you shouldn't try to be good at yourself I think it's just going to help you to be even better basically there are many times when I just didn't know like how to like understand a concept like let's say I was doing physics assignments and I was just not understanding like some electromagnets and principle and if I can have access to an AI that can explain that to me really well and I can bug that AI hundreds of times until I truly understand it which know even you as a dad John might not have the patience to explain to your 12 year old like some concept about Faraday's laws like you know hundreds of times you might be like hey man I told you like enough like you know whatever you don't get it whatever you know but AI is not going to say that it's going to like okay okay get sure you know here's the thing yeah yeah I get it here and so that I would find it amazing now is for writing itself there is this thing about like AI right to AI written content right now that feels very like mid you know not really insightful to read it's it's great to read for you know like like like like a legal language that you might want to write or if you're trying to write like a newsletter things like that it's kind of okay but if you're just trying to write an email that sort of person will touch to it doesn't exist so my sense is that maybe I'll drop on my own personal experience here but when I used to be in school I used to be really good at every subject other than English now that I was bad at English I was scoring pretty well but I was each subject had the top and I was I would always just like top every subject but English I would 
not be the top I would be like the second and I would be like hey why why is that I'm not a do and I would go and ask the teacher why is it that I'm not the top here and like you know like I'm I'm doing well and she'd be like you know you're there be a part of the exam where you have to write your own stuff the the writing section of the exam where you'll be an imaginary situation and you've got to write an essay or articles and she'd be like your your things sounds like how I would read in like a example essay it doesn't have imaginational creativity and and the top or English they would be more creative so that's something I would say is amplified even more in the era of chativity where the basic thing is already really good and that's chativity what if John heard in this I want to know that where is your personality so the the need to have your own personality and the need to exhibit it is even more now in the era of chativity within content basically like if you see a steep job say something there's okay here another thing I'm in my own office a measure of a smart person is like you shouldn't be able to predict what they say like if you ask them a question if a sign of a smart person is you shouldn't be able to 100% for you you could have a like okay 70% chance like he's going to be pretty there's always this variance in them that keeps them interesting otherwise they're boring you can model them perfectly and you can have the conversation with them with a chatbot of them right um and I think that's kind of going to be true for kids from now like the the interesting ones are those who

Jon Radoff: can say stuff that chatgbd cannot say I'm thinking of this whole feedback loop that could emerge though which is we have copat we have a i copiolets who will become increasingly trained on ourselves not just sort of bringing like a chatgbd maybe it'll help us figure out who we are

Aravind Srinivas: Yeah, it'll help us figure out who we are, and we'll all become even more creative. We'll all wonder about the parts of the universe, and the parts of our day-to-day society, that haven't yet been understood well enough; we'll focus our attention there, and it'll drive human society forward. Then that data can be ingested by the next generation of LLMs, and after that we focus on the next set of things. I feel like it's just the best way to raise human IQ — the joint IQ of a human plus a copilot is much higher than that of a human without a copilot. And to be clear, an LLM is not smarter than the smartest human on the planet right now. It is not. I'm not worried about those existential risks. I'm just amazed at the possibility of a human plus an LLM working far better together than the human alone — it's a day-and-night difference. If that's amplified at the scale of our population, it's going to be amazing. It's a Renaissance moment: we all have access to truth in a much more high-bandwidth fashion than ever before.

Jon Radoff: Yeah, and it's also important to see that it isn't just individual humans with an AI copilot — it's the whole network of humanity, and the fact that individuals elevated in that way are then working with each other. So it's a network intelligence.

Aravind Srinivas: And people will coexist with it. Like, you and your son and Perplexity and a Character.AI character could all be in a chat room talking to each other. That would be amazing — those are experiences we wouldn't even have dreamed of before, and I think you'll all come to take it for granted.

Jon Radoff: I think it's a beautiful dream, and one that I share. So let's dream a little more for a minute. First of all, what are you excited about over the next 12 months? Everything is moving so fast — both in this market generally and at Perplexity. What's on your roadmap? How is the world going to look different even just a year from now?

Aravind Srinivas: I just want everybody to have access to the smartest person they can talk to, for learning about anything and everything, and for help with any research they want to do — minimizing any barrier to using this product as a sort of assistant for everybody. That's the next 12 months, at least for me: get it into the hands of more people and help them be smarter. Make everyone smarter. If people are smarter, their lives will just be better — they'll have a higher quality of life, and with a higher quality of life, they'll be happier.

Jon Radoff: Making the world a better place. Let's double-click into that. A year from now, two years from now, billions of people are smarter — how is the world going to be different?

Aravind Srinivas: They'll just make better life choices. They'll have more time and spend it more wisely; they won't waste it on mundane things. They'll dig deeper into things, they'll be more thoughtful about what they say, they'll have more compassion — they'll learn how to be more compassionate using an AI. They'll know exactly what to do in their lives; they'll nail their jobs. I'm not saying everyone is going to do this, but at least a few people will start, and that will spread eventually to the rest of society.

Jon Radoff: That would be exciting. It's the world I want — it's the one I'm working on. That's why I use AI, and that's why my company is incorporating AI.

Aravind Srinivas: I mean, I have a lot of respect for how fast you are onto new things. You've been talking about generative AI for probably longer than anybody else I'm aware of.

Jon Radoff: Well, my life mission is helping people be more creative, and the medium of our time is games. So the idea is: how do you help people craft experiences and online environments, in everything from spatial computing to massively multiplayer online games? That is the expressive medium that exists now — much bigger than music and movies and so many other things. My mission, really, for any form of creativity, is to help people be more creative. My big transformative goal for the world would be: what happens if you could 10x creativity in the world? What if we got to a billion highly creative people — whether creativity means media and content and storytelling, or science, or making things of value to society and civilization? That's what this stuff is going to do, and it's super exciting. I encourage everybody to check out perplexity.ai — go try it if you haven't yet, and given the numbers we mentioned at the beginning, chances are you haven't. You've probably tried ChatGPT, but you should definitely try Perplexity, because I think it gives you really interesting answers, and it lets you look into the information behind the results you're getting and accelerate your own learning process — which is an incredible thing, because if you can learn faster, you can innovate faster, iterate faster, and make better things for the world. So, Aravind, thank you so much for joining me in this conversation. Everyone, check it out — we'll put some links in the show notes as well so you can go and find things.

Aravind Srinivas: All right, thank you, Jon. Thank you.