
ChainStories
We are dedicated to empowering the next generation of innovators, as we spotlight the brightest young founders and game-changing projects, sharing stories of how they’re reshaping industries and pushing boundaries.
Join us as we provide insights, resources, and market strategies that bring visionary ideas to life.
🐦 Twitter/X: https://x.com/BlockchainEdu
🎙️ LinkedIn: https://www.linkedin.com/in/cryptoniooo/
ChainStories
"How I built a multi million dollar startup in 24 hours" - Interview with CEO of YesNoError
In this episode, Matt shares the incredible story of how he built YesNoError, an AI and crypto-powered platform, in just 24 hours, and saw over $150M in trading volume on its first day! 🚀
A self-taught entrepreneur, Matt skipped the traditional college route and went straight to Silicon Valley, where he connected with giants like Sam Altman and Brian Armstrong. His unconventional path didn’t stop him from launching successful companies and making waves in the AI, crypto, and Web3 spaces.
YesNoError has already audited nearly 10,000 research papers, uncovering 100+ errors, and gained over 9,000 token holders on day one. With plans to process 1 million papers, the platform is making a huge impact on the scientific research community.
If you’re fascinated by the future of AI, crypto, and scientific validation, this episode is packed with insights you won’t want to miss.
🎙️ THE ChainStories Podcast – powered by Dropout Capital and the Blockchain Education Network!
🐦 Twitter/X: https://x.com/BlockchainEdu
👤 Guest on LinkedIn: https://www.linkedin.com/in/mattschlicht/
🎙️ Host on LinkedIn: https://www.linkedin.com/in/cryptoniooo/
-----------------------
What is BEN?
The Blockchain Education Network (BEN) is the largest and longest-running network of blockchain students, professors, and alumni around the world! We are on a journey to spur blockchain adoption by empowering our leaders to bring blockchain to their communities!
https://learn.blockchainedu.org/
-----------------------
Thank you for listening!
Want To Start Your Own Podcast? We Use BuzzSprout:
https://www.buzzsprout.com/
Welcome to the ChainStories podcast, the podcast that celebrates disruptors who defy convention. Here, we dive into the bold stories of trailblazers who turned audacious ideas into billion-dollar ventures.
Welcome to the ChainStories podcast. Today, I have a pioneer, someone who is working at the forefront of AI. Actually, something that I did not know existed before it came across my eyes: the founder of YesNoError, one of the first autonomous DeSci AI agents. Matt, it's a pleasure to have you on the show. And for our listeners, would you like to take a deep dive and introduce yourself? Yeah, thank you so much for having me here. Happy to give you a quick background on myself. So I'm Matt Schlicht. I grew up in Southern California and I didn't go to college. Instead of going to college, I moved directly to Silicon Valley back in 2007, and I dove into the world of venture-backed startups. I joined a company called Ustream, which was one of the first companies pioneering live video, and convinced them that they should let me run all of product. Ustream ended up getting acquired by IBM four years later for over $150 million. And I got super lucky to get educated in the streets instead of college. And when I was there, I got mentored by this amazing guy, Josh Elman, who launched the Facebook API at Facebook, grew Twitter, and invested in Musical.ly, which became TikTok. Just an expert product person in Silicon Valley. I took a company through Y Combinator, the same class that Brian Armstrong, the founder of Coinbase, was in. He was the only person at Coinbase at the time. Forbes 30 Under 30 twice, for whatever that is worth. And I've taught myself to code over my journey. I have been building in artificial intelligence since 2016, and when OpenAI came out with their API, the first private version, I was one of the first, I don't know, couple hundred people that had access to that in 2020, and I have been on the forefront of building AI and autonomous agents ever since.
Most recently, I launched YesNoError, an AI agent that goes through all scientific research that has ever been published and uses the most advanced AI models to detect errors in those papers. We've already found many errors and contacted the authors of the papers, who confirmed that those errors were real and went on to fix their papers. And we're on a mission to do that across all the scientific research that's out there. That's quite a journey. And going back in time, a lot of our listeners might face this situation. Did you think, even for a second, or did your parents tell you, you need to go to college, you have to finish this out, you can't drop out? Or was this a self-debate? What made you decide, okay, I'm going to go on my own and try? Yeah, so I don't have anything against college. I definitely wanted to go to a great college. The high school that I went to was a very good high school. It's called Sage Hill, in Newport Beach. I didn't have a lot of money; everybody else around me had a lot of money. And what was interesting is the school I went to before that is called Waldorf. Anyone can look it up if they're interested. One of the pillars of Waldorf is that you don't use technology. So I didn't have a computer. I wasn't playing video games. I wasn't watching TV. But when I got to high school, I suddenly was flooded with all this technology. And I was like, oh my God, a thousand people work at Google. This is super crazy. And I got my first laptop. And so in high school, I didn't do any homework. All day long, all night long, all I did was build things on the internet, just super fascinated that you could go on there, nobody cared how old you were, you could just invent anything that you wanted. And so junior year of high school, I actually ended up getting kicked out. They asked me not to come back for senior year, which was obviously super upsetting.
And so over the summer, I actually got access to the school's email system. I hacked into it and found an error in their system, and I told them about it, so I didn't exploit it or anything. I said, hey, I actually found an issue in your email system, I can access everybody's emails, I think you should probably fix that. And because of that, they actually asked me to come back and finish out senior year, which was awesome. My grades weren't super good, so I didn't get into any great colleges. And so the option was: do I go to a not-great college, or should I just fight my way into Silicon Valley? And that's what I did. Two or three years later, I actually went back and gave the commencement speech at the high school. Fascinating. And so pretty much you taught yourself how to code during high school. You were swamped with these materials, this world of digital economy and digital information, and you just took a straight dive into it. Was it your dream to one day go to Silicon Valley and build your startup? How did this start within you? As I grew up, I always wanted to be an inventor, and I didn't know what that meant. I just wanted to create things. And what I discovered in high school, when I got access to the internet, this was when Facebook was coming out. I remember being one of the first people who got a Gmail account, and you had three email invites you could send to other people. What's fascinating about the internet is that it's not a physical place. Anybody can grab a street corner and get access to every single person who's on the internet. And so if you can go build something digitally, you have the potential for the entire world, or at least anyone who's online, to interact with that thing you made. So in high school, I decided that if I was going to invent something, I was going to invent something digital, something on the internet.
And ever since then, yeah, my dream became: I'm going to go to Silicon Valley. The moment I graduated high school, I got a job and worked my way in. I started as an intern, moved up to business development, and then moved up to leading all of product. And yeah, that was the dream. Ever since then, I've just been working with people inside of Silicon Valley, working with investors and staying on the forefront of technology. At first it was live-streamed video, then it was social networks, then it was crypto for a little bit, and then for the longest time now it's been artificial intelligence, which is probably the most transformative technology that will ever exist. Yeah, absolutely. We get asked this question a lot by founders from all over the world that we invest in or speak with: should I move to Silicon Valley? And Matt, do you see Silicon Valley today as a place where a founder working on AI and tech really must be, the center of the action, otherwise they're going to miss out and be left behind? I don't think it's required that you go to Silicon Valley in order to be successful. There are definitely examples of people who don't go to Silicon Valley and become successful. But Silicon Valley is still the heart of technology, and if you are a young person and you want to throw yourself into this world completely, then you should. I would still strongly recommend that you move to Silicon Valley for a period of time. Because when you go outside and walk down the streets, the people you bump into are the builders of technology. When you go to a party, the people there are the builders of technology. When you get coffee, the people next to you are the builders of technology. And you're going to meet other people who are young like you, who have their whole careers ahead of them. And what's going to happen is, you're going to build these really strong relationships.
And this is a very long game. You're not just playing this for the first day or the first year or the first five years. This is something where the relationships you build mean you can be working with people for many decades. And so if you meet people who are also young when you're young and you build a really tight relationship, then later, when maybe you're both not in Silicon Valley, you can still be working together. They have built their career and you've built yours. So if you are thinking of going into tech and you have the means to move to Silicon Valley, you a hundred percent should do it for some period of time. Yeah, I think that makes a lot of sense. And going back to the time when you were there, working in business development in that first job, how was the transition into Y Combinator? This was back in 2012. Was it something where you told yourself, okay, I need to go build my company? Had you applied before? Y Combinator, for everyone who doesn't know, is basically the number one startup incubator in the entire world. It's based in Silicon Valley. And this is where some of the most incredible companies in the world have come from: Airbnb, Twitch, Instacart, Coinbase, Stripe. All of these companies were started very young and funded by Y Combinator, and they were helped in the very beginning by the Y Combinator community. And if you are a startup founder, or an aspiring startup founder, one of the greatest investors that you can get is Y Combinator. It's basically a three-month program that gives you very intensive relationships and training and suggestions. And the whole goal there is: build really quickly, find product-market fit, go out and succeed. It culminates at the end with something called Demo Day, where you and everybody else present to basically all the who's who of investors in the world who are looking to invest in the next big thing.
And after my first job at Ustream, I was lucky enough to get into Y Combinator after applying about three times. I took a company through Y Combinator in 2012. The company didn't end up succeeding, but we built an incredibly viral product called Trackspy, where celebrities could take advantage of social networks and viral loops to launch content at a very large scale. So any rapper at the time you could think of worked with us: Lil Wayne, Drake, all these people were the people we were working with. We just didn't make any money at all. And my Y Combinator class was actually a very unique one. Instacart, which is a massive company now, was in my Y Combinator class, just a handful of people back then. Brian Armstrong, the founder of Coinbase, was in my Y Combinator class, and he was the only person who worked at Coinbase. It was just Brian. Bitcoin was still very early; Mt. Gox was still around. I think the hack happened around then or right after. So a very fascinating time. If you have the chance and you have an idea you think is special, I would definitely recommend applying to Y Combinator. It's an incredible place. Awesome. And going back to that moment when you had Brian Armstrong in the same class, do you remember what the comments or the sentiment of people towards crypto were? And even yourself, what did you think about crypto? What is this guy out there building, a Bitcoin exchange? What do we even need this for? It was very early. I think I bought my first Bitcoin in 2011, and I bought it on Mt. Gox. I ended up losing all of it in the hack that happened. So in 2012, I forget what the price of Bitcoin was, it was very low. Brian offered everybody in the class one: he said, if you sign up for Coinbase, I'll give you a Bitcoin. And the sentiment was such that most people didn't even sign up, because it was just this ridiculous thing.
A Bitcoin also wasn't worth very much, and people were confused. And to give you another comparison: when we went out to fundraise at the end, you have Brian Armstrong saying, hi, you have Mt. Gox, the Wild West of crypto, and I'm going to go super legit and build basically the fully legit version of Mt. Gox. He had a business model, taking a percentage of transactions, and there was a whole movement behind it. Then you have me saying, hi, I'm building viral products to help rappers launch music, and we make no money. My company in the beginning raised more money than Brian raised with Coinbase. And so that's just another example of how people didn't understand; it's so hard to be an investor. Gary Tan, who is now the CEO of Y Combinator and is doing a fantastic job, at the time was just a partner at YC, which meant he would help individual startups, give you advice, and you could meet with him. Sam Altman, who went on to co-found OpenAI, was also a partner at the time. So these people were just walking around and giving you advice. Gary had just started investing in YC startups himself. And in the year that I was in there, he invested in Instacart, he invested in Coinbase, and he invested in me. Two out of three of those became multi-billion-dollar companies. I was the only one that didn't, but I like to think there's some sort of pattern matching that I fit in with there. Well, I'm sure you got a lot of experience being in such an environment, and where you are today, you have a wealth of knowledge from that time. Absolutely. Yeah. I think if you're starting out now and you're going to go build a startup, the biggest mistake you can make is asking, how can I replicate the most successful people and where they are today? This is the biggest mistake that you can make. What you need to do is go back to the beginnings.
And if you're going to try to replicate someone, you need to start where they started, because there's no way that you can just, bam, make Stripe what it is today. Bam, make Bitcoin what it is today. Bam, make Coinbase what it is today. You have to figure out where it started and then progress up. And through that, you're going to have to find your own journey. One of the biggest values, for me, of going through Y Combinator is that I know these very rare tidbits of what certain people liked and did and thought and said well before anybody knew who they were. Absolutely. And since you were in such a closed environment where unicorns came out, is there a secret ingredient that you noticed? Was it the team that Brian hired? Was it the VCs on the cap table? Was there something specific you noticed that keeps happening over and over, or was it just a combination of elements? Also, the fact that the market is super favorable sometimes. Yeah, so this is the whole science of either starting a company or investing in companies, and it's constantly evolving and changing, because narratives are changing, the economy's changing, the market's changing, the world's changing. So I think one side of it that is clear is you need to have something that can impact a large number of people, and that when you have it, even in a not-perfect version, there are people out there who will just truly love it. You don't want 10,000 people who slightly care about something. If you can have even a hundred people who really care about something, like your customers, that's where you want to start, right? Because then you can grow from there. You need to be doing something where your potential customers or your user base are super crazy passionate about it, where this is the most important thing to them. So that's one thing. The other thing is if you can somehow recognize a large movement.
One that, for a group of people, is the number one thing they care about, but if you went and talked to most normal people, they'd have no idea what you're talking about. That is a very unique opportunity, where maybe you can be the one to shepherd and formalize this thing that this group of people really cares about, and you can bring it to the masses, right? Brian Armstrong saw something interesting happening with crypto: the underlying concept of a trustless system that can power lots of different things, money being one of them, but lots of other use cases too. Whether or not people were making money on it, he saw that. Enough people cared about this, they cared about it really passionately, and if it got big, it would not get sort of big, it would be the biggest thing, right? He had also previously worked at Airbnb, he was a developer, he could code things, so he was also very smart. So you take those two things: experience and the ability to execute, and he knew what Airbnb looked like as it got big. And then he had also found an interesting pocket of passionate people where, if they were right, they would be really right. And he merged those two things. And then there's a huge element of luck there, right? Just because he did those two things doesn't mean he would have done them right, and it doesn't mean that that's actually how the world would play out. Probably, if you asked Brian today, is Bitcoin anywhere near as big as you think it will be, he would say no, it's super small compared to how big I think it will be. So: the ability to execute, a group of people who care about something, that something being able to get really ginormous, and then a good amount of luck. Absolutely, I think that makes a lot of sense, especially with crypto. He was in the vanguard, he was the first pioneer, and there were no roads. But there was a lot that could be walked.
And I also noticed that myself, being in this space for ten-plus years: everyone out there thought I was crazy when I spoke about this. People still think you're crazy. You could probably go talk to most people right now about crypto and they'd still think you're crazy. So I think that's a great sign to continue in this room; as they say, if you're here, you're early. Absolutely. And Matt, going back to your story. You went to Y Combinator, then, going into 2016, you mentioned you were one of the first people to use OpenAI's API. How did you come into AI? What was the sentiment at the time? Especially since this was probably one of the first versions, if not the first version, of OpenAI's release. I'm sure it was not as robust, and the system probably hallucinated a lot when you gave it a prompt. What drew you into it? Actually, a little bit before that, in 2014, I made a little experiment called ZapChain. I thought it'd be fun to make a social network where, instead of upvoting each other, you gave people tiny amounts of Bitcoin. And this was before ICOs happened; I think if we had been a little bit later, we could have done an ICO, and that would have been an incredible way to monetize and build that. But I got all the who's who; anyone who was anyone at the time in crypto was using ZapChain. Vitalik did an AMA on ZapChain before he launched Ethereum, to promote the launch of Ethereum with the people who were on there. So it was pretty crazy. In 2016, I saw that you could start to build chatbots on places like Facebook Messenger and other platforms, and AI was getting better and better. Nowhere near where it is today. We did not even have large language models. So all of the experiences people are used to today with ChatGPT, that didn't exist at all. It was way more basic. But it was very clear that this is where the world was going.
You were going to get to a place where you could just talk to an AI, and the AI would be able to do most anything and everything that a person could, and maybe eventually even better. So in 2016, I started a company called Octane AI, where you could create a chatbot and use very basic AI with it. That company right now is profitable, makes millions of dollars a year, and has over 3,000 customers. We found a niche specifically in helping e-commerce brands use AI to help their customers figure out what products they should buy. But because I'd been on the forefront with this team building AI products, when OpenAI came out with their first beta for the OpenAI API, this was late 2020, I was lucky enough to get an email from them inviting me to be among, I think, the first couple hundred people who had access to this incredible GPT technology. And back then, when you used it, this is the technology that became ChatGPT, it couldn't even do poetry. You would say, hey, can you make a poem? Can you make something rhyme? It wasn't even good enough to do that. It was very bad compared to where it is today, but it was magic. You could do incredible things. And so ever since then, every single day, I've coded with LLMs. I've coded with OpenAI's API, with Google's and Anthropic's and Llama, and most recently DeepSeek, because it's just like having magic in your hands. If crypto is this thing that you can trust, a ledger with all sorts of use cases, which is super important, what's fascinating about AI is that it's intelligence as a resource that you can program, and it's just getting exponentially better. So I started building games with it. I started building chatbots with it. I started exploring different techniques like RAG, where there's this concept of embedding vectors: you can take a chunk of text and basically give it an address in a thousand-plus-dimensional space.
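The embedding idea described here, giving each chunk of text an address in a high-dimensional space and retrieving the closest chunks, can be sketched in a few lines of Python. Note the `embed` function below is a toy bag-of-words stand-in invented for illustration; a real RAG system would call an embedding model API that returns dense vectors with a thousand-plus dimensions.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a sparse bag-of-words vector.
    Real systems get dense, high-dimensional vectors from a model API."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity: how close two vectors are in the embedding space."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks whose embeddings sit closest to the query's."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Bitcoin is a decentralized digital currency.",
    "Embedding vectors map text into a high dimensional space.",
    "The mitochondria is the powerhouse of the cell.",
]
best = retrieve("what are embedding vectors", chunks)[0]
```

In a real pipeline, the retrieved chunks would then be pasted into the model's prompt, which is the "power chatbots" step Matt mentions next.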
And then you can use that to find correlations between different texts, and you can use that to power chatbots. Anyone who's interested in building with AI, look up RAG; it's very basic once you start building with it. Yeah, I've been building with this technology since you could. In 2023, one of the interesting things I saw was this rise of autonomous agents. I saw people building AutoGPT, which is now a top-10 most-starred GitHub repo, like 170,000 stars, and BabyAGI, which I think has 20 or 30 thousand stars. People were experimenting with: what if, instead of you telling the AI what to do, you had the AI tell the AI what to do, and it could do that in a loop? You could start to build AIs that were autonomous, that just kept coming up with a to-do list, did the to-do list, and kept going. And when I went and talked to a bunch of people about this, they'd never heard of it. So in early 2023, I wrote the number one article about autonomous agents. I went and talked to every single person who was doing anything with autonomous agents, including the founders of AutoGPT and BabyAGI, people from Nvidia, top investors. And when I published this article, a quarter million people read it, and I got super connected to anyone and everyone who was building something. I think a lot of companies started because they read that article. And since then, I've just been building autonomous agents, and that's where YesNoError ended up coming from, over a year later. A question that I have regarding agents: what is the difference, the essential difference, between an agent and a bot? What distinguishes them in the system? So in some situations, I think people can probably use the words interchangeably.
I think when people talk about autonomous agents, though, they're specifically referring to an AI or a bot that is not only doing what you are telling it to do, but is deciding for itself at certain points in its system, thinking about what it should do next. If you ask ChatGPT a question, you're giving it a very direct ask, and it's just going to come back with one answer. So that's not really an autonomous agent. You ask it something, it gives something back, and the thing stops there. Whereas with an autonomous agent, you might say: hey, can you go analyze the most interesting science of the day? And so maybe the first thing the agent says is, okay, I'm supposed to figure out what the most interesting science of the day is. What is the first thing I should probably do? Maybe I should go Google different science topics. Then it goes and Googles them, and it pulls back results. Then it says, you know what, are these results good enough, or do I need to come up with even better results? Maybe I need better results. Then it generates a list of new searches it needs to do. Then it goes out and does those searches, and it pulls back those results. It says, okay, is the science actually in the content I just pulled back, or do I need to look inside the documents to find out? Maybe it links to a PDF somewhere, maybe it links somewhere else. And it keeps going in this loop, where it has an objective, and it's autonomously deciding what the next steps are to figure out how to achieve that objective. And AI is getting better and better, because at first you could only interact with what the LLM knew itself, since it's trained on all this public knowledge. So if you ask it a question, it knows the answer within its own trained memory. But now they can search the internet.
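The decide-act loop described above, an objective, a planning step, a tool call, then back to planning, can be sketched as a minimal Python skeleton. Both `call_llm` and `run_tool` below are scripted stand-ins invented for illustration; a real agent would replace them with a model API call and real tools such as web search or a browser.

```python
def call_llm(objective, history):
    """Stand-in 'planner': scripted decisions instead of a real model call."""
    if not history:
        return {"action": "search", "input": "interesting science today"}
    if history[-1]["action"] == "search":
        return {"action": "summarize", "input": history[-1]["result"]}
    return {"action": "finish", "input": None}

def run_tool(action, arg):
    """Stand-in tool executor (search, summarize, ...)."""
    if action == "search":
        return "3 candidate papers found"
    if action == "summarize":
        return f"summary of: {arg}"
    return None

def run_agent(objective, max_steps=10):
    """Loop: ask the planner what to do next, do it, repeat until 'finish'."""
    history = []
    for _ in range(max_steps):  # step cap so the loop always halts
        step = call_llm(objective, history)
        if step["action"] == "finish":
            break
        history.append({"action": step["action"],
                        "result": run_tool(step["action"], step["input"])})
    return history

steps = run_agent("analyze the most interesting science of the day")
```

The step cap matters in practice: an agent that decides its own next action needs an external limit, or it can loop forever.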
Most recently, OpenAI talked about Operator, where the AI can now use a browser to go browse web pages. There are other platforms building in APIs where you can actually have the AI contact a human and then wait for a response from the human. You can have an AI call people, you can have an AI email people. At some point, the AI will be able to be in the physical realm with autonomous robots. I think that's the difference from a bot: are you getting one simple answer back, or is it thinking constantly and going off to do different things you might not necessarily have told it to do, to come back with that final result? I really liked the analogy you used about how it's almost like magic, right? What we're creating here are, in some ways, frameworks, but we're essentially molding responses to be structured and to create sense out of what the AI is spitting out, and then giving it limbs, giving it voices, giving it the ability to act on its own and take on these roles alongside people, which I think is really cool. Tying it to YesNoError: how would you describe YesNoError, and what role will it play in advancing scientific research? YesNoError started as an accident. I saw this rise of AI plus crypto, and I found it super interesting: the idea of tokenizing agents. There's something very fascinating about tokenizing agents. But when I went and did a lot of research into what was out there, I thought I saw a lot of AI tech that didn't look very impressive. And obviously there's a lot of speculation; that's a huge part of crypto, people looking to invest in and make money as a narrative grows. And even technology that's built but isn't very good can sometimes really not matter, because if enough people are speculating on it, then the price goes up and half the people are really happy.
I don't think that does justice to what is possible with tokenizing an agent, combining AI with crypto. And so the simple beginning of YesNoError was that I thought the space deserved an example of something that was actually a really virtuous, good use case of combining AI plus crypto and tokenizing the AI. The way I discovered the idea is I was scrolling through X, and I saw this conversation between Marc Andreessen, one of the founders of Andreessen Horowitz, one of the greatest VCs out there, and Ethan Mollick, who is a top leading AI expert. What they were talking about was super interesting: in October of 2024, there was this peer-reviewed research paper that got a tremendous amount of press. All major media outlets covered it. It basically said that black cooking utensils were extremely toxic and that people had missed this. It's so toxic that it's making you sick: you need to go throw away your black spatulas, all of your black cooking utensils. You need to go into the kitchen and throw them away. So this was on TV, all major press. If you go Google "black cooking utensils toxic," you'll find a ton of results. Two months later, in December of 2024, so very recently, it turned out that this research paper, peer-reviewed and covered everywhere, actually had a mathematical error in it. The amount of toxicity was accidentally multiplied by 10. So actually, it turns out they're not toxic at all, not toxic enough to matter, and you don't need to throw these away. And the discussion between Ethan Mollick and Marc Andreessen was that they had discovered that if you pass this paper over to OpenAI's o1 model and simply ask, are there any errors in this paper, in 30 seconds, basically instantly, it says: yes, one of these numbers is multiplied by 10. There's an error here.
And they thought: what if you went and replicated this? If AI is good enough now to catch these errors that weren't caught, what if you did this with a thousand papers? And so I saw this and said, look, I can totally build this. And why stop at a thousand? There are over 90 million research papers ever published. Why don't I build a prototype that will just start going through these papers, using more advanced techniques than simply asking "is there an error?", and let's find out what's going on. And I thought, it's going to cost money to pay the AI to process all this, and no business would ever create this product, because how would they make money from it? But if I create a token, then the token can almost act as a prediction market for whether people believe this type of AI auditing agent should exist. And so that was the thought. Within 24 hours I built this. The next day I said, okay, I think it's ready. I posted it, I launched a token, and it just immediately went crazy. I thought that this would be something I could write about in an article. I could go tell people, hey, something really interesting is happening with AI plus crypto. But it was, like, day one, $150 million worth of volume, over 9,000 holders right away, millions of views on X. And, for anyone out there who is an aspiring entrepreneur: you may hear this story and think, wow, Matt was just jumping up for joy when this happened. I was very excited, but this is a large volume of activity happening that you need to compress and analyze, and you need to calm yourself. What I did when this happened is I thought, you know what? We have lightning in a bottle. This is a very good idea, it's very virtuous. We should go find all the papers. We should audit them, and probably you don't even have to stop at science; you could do so much more here. And so I decided very quickly, but very thoughtfully, that I was going to go all in on this.
And my co-founder at Octane AI, Ben Parr, who I've also known for a very long time and who has been with me since the beginning: he used to be the co-editor of Mashable, he wrote a best-selling book on marketing, he interviewed Mark Zuckerberg and Steve Jobs and all these incredible people. We decided, let's go all in on this. And that's what we've been doing ever since. Right now, I think we're a little over 30 days into it. Wow, yeah. Yeah, that is very cool. It's crazy how some things can just take off right away. But many years in the making, right? It's a similar story with a lot of these AI projects. What we've seen is that some of these overnight successes, like everything with ai16z, everything with Virtuals, they've been working on this for a long time, and it's not until now, when we have the right timing in the market where you can tokenize these agents, that people are really starting to pay attention. One thing is, how do you guys deal with, say, minimizing hallucinations and not getting false positives, where it says there's an error when maybe there actually isn't one? So the first version of YesNoError simply pulled the most recent research papers from arXiv, which is an incredible source of new papers; DeepSeek's newest paper was published on arXiv. And it checked them for mathematical errors and a couple of other things. And that system, even though it was v1, has already processed almost 10,000 papers. Over a hundred papers had mathematical errors in them. And I think one of the thoughts when you look at that is, okay, sure, it's found over a hundred papers with mathematical errors, but is it true? Are those errors actually errors, or is it just saying they're errors? And so what I did is, on arXiv you can actually go pull up the author's email address. You just have to type in a little captcha, and it gives you their email address.
And I thought, why don't I just go do that for the hundred papers, and then contact them and say, hey, I made this AI agent, it's auditing papers, I audited yours, it said it found an error, and I just want to double-check whether that's accurate. And of course not everybody got back to me, because that's just how email works, but 90 percent of people got back to me, and almost every single one of them was very thankful and said that, yes, in fact, there was an error there. And they acknowledged it. There was one situation where someone had written a paper about another paper that had an error in it. So their whole paper was built on the premise that another paper had an error. And so the AI did have a false positive: it saw that mathematical error discussed in their paper, because it was there, but incorrectly flagged their paper as being wrong. So then I had to go tell the AI: yo, you really gotta double-check. Is it talking about a math error in something else, or is it talking about a math error in this paper? So that was V1 of the system. For the V2 of this system that we've been building out: there are multiple places that you can pull research papers from, and arXiv is only one of them. There are a lot of places you can pull different types of papers from. So we've been building out a system where we can index papers from multiple different sources, and we're continuously adding new sources. My first version was only extracting HTML, which meant it wasn't pulling out images, so if there were graphs or diagrams, it wasn't pulling those in. The new version does that and uses AI vision to basically transcribe and analyze the images as well. And the new version also has a much better approach to extracting mathematical formulas, to make sure that we keep consistency. So in the first version, the hundred errors we found are real, but maybe there were a couple more errors that we didn't actually catch.
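The fix Matt describes, teaching the agent to distinguish an error in the paper itself from an error the paper is merely discussing in a cited work, can be sketched as a post-filter over flagged results. This is an illustrative Python sketch, not YesNoError's actual code; the field names and paper IDs are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ErrorFlag:
    paper_id: str
    description: str
    # Set by a follow-up prompt to the model: does the flagged math error
    # belong to this paper, or to a cited paper the authors are analyzing?
    location: str  # "this_paper" or "cited_work"

def true_positives(flags):
    """Keep only errors the model attributes to the audited paper itself."""
    return [f for f in flags if f.location == "this_paper"]

flags = [
    ErrorFlag("2412.00001", "factor-of-10 slip in toxicity limit", "this_paper"),
    ErrorFlag("2412.00002", "analyzes a known error in a cited paper", "cited_work"),
]
kept = true_positives(flags)  # only the first flag survives
```

The key design point is that the location question is asked as a separate, explicit step, rather than hoping the first error-detection prompt gets it right.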
The other part of the new system is, instead of just throwing the most advanced model at everything, which is also the most costly model, we've been experimenting with using cheaper models to do a pre-analysis that flags anything that might have an error, and then you can rerun that with the more expensive models. And we also announced a partnership with Brian Armstrong's company ResearchHub, where they have a huge group of peer reviewers. They're basically disrupting the peer review industry by making it really accessible for anyone to get peer reviewed by a human. If we're making it possible to be peer reviewed by an AI, they're making it possible to be peer reviewed by a human. And what we're doing with them is, anytime we find an error, we're putting a system together so that we can actually have a real human with a PhD go and verify that error. So I think there's a lot more work to be done here to make this even much, much better, but that's where we're at right now and in the immediate future. And then, I imagine a lot of these papers are public, but what is the etiquette for expanding this to confidential information: training on confidential data, intellectual property, copyrighted data? Is there a path for that? So we're starting with public research, and I think it's important to just do a really good job with that. If in V1 we went through almost 10,000 papers with a prototype, V2 is going to allow us to go through millions of papers. And what we're looking to do is basically publish a protocol where we show, hey, this is actually working at scale. You always have to be open to the possibility that it's not working at scale, but everything that we've seen so far indicates that AI can catch a good amount of errors. Can it catch all of them right now? I don't know, but it can catch a lot. And 1 percent, I think, is a pretty high percentage of papers to have errors in them.
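The cheap-model pre-screen followed by an expensive-model confirmation is a standard cost-saving cascade. A minimal sketch, with stub checkers standing in for the actual model calls (the real system would call LLM APIs at both tiers; the toy papers and checks below are purely illustrative):

```python
def cascade_audit(papers, cheap_check, expensive_check):
    """Screen every paper with the cheap model; escalate only flagged ones.

    papers: dict of paper_id -> text
    cheap_check(text) -> bool (might there be an error?)
    expensive_check(text) -> (confirmed: bool, detail: str)
    """
    confirmed = []
    for paper_id, text in papers.items():
        if cheap_check(text):                    # inexpensive first pass
            ok, detail = expensive_check(text)   # costly second pass
            if ok:
                confirmed.append((paper_id, detail))
    return confirmed

# Toy stand-ins: flag any paper whose arithmetic doesn't add up.
papers = {"a": "we report 2 + 2 = 5", "b": "we report 2 + 2 = 4"}
cheap = lambda text: "=" in text  # crude screen: escalate anything with math
expensive = lambda text: ("2 + 2 = 5" in text, "arithmetic error: 2 + 2 != 5")
results = cascade_audit(papers, cheap, expensive)
```

The economics only work if the cheap screen has high recall: a false negative at the first tier is an error the expensive model never gets to see.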
So the next version that we're doing is, let's go through 100,000, let's go through a million public research papers, and then let's publish our findings from that. Maybe we even make it so you can go replicate it, right? This needs to be very trustworthy. If this is working at that scale, it's very important for us that we're showing you exactly what we did, how we did it, and what can happen. And our goal isn't to go out and say gotcha to all these scientists who are trying to move science forward. I think it makes a lot of sense that we come out with something where, when you're writing your research paper, or if you're a news publication about to publish, you know, the black spatula paper, you first run it through YesNoError and make sure everything is looking good. Let's actually accelerate science by catching errors before they're out there causing a problem. Now, to your question: what do we do with private data, and how do we handle that? We don't have a solution built for it yet, but a couple of interesting things. We've had a major health company, which I posted on X about, reach out, like a $15 billion health care company, and they're interested in using this internally to audit their medical papers and their internal research and their internal documents. And so I think as we continue to expand YesNoError and focus on this public good, there are going to be more and more examples of large corporations who will want to do audits at large scale. They will not necessarily want to publish those publicly, but they will want to pay for them. And so there's actually a really big business that you can build here by helping people analyze private data. And then on the other side, we're basically powering this public good, where we're analyzing any amount of data that's out there that we can, and then publicly sharing the results.
And on the model side, do you think the same applies, where models might be treated differently? There are people who are hesitant about using DeepSeek, right? Google Gemini has been shown to be skewed for some results. How do you balance which models you choose to use in your analyses? Yeah, so different models are trained on different amounts of data. I think DeepSeek is a really good example, where there are certain things that, if you ask DeepSeek, it actually says it doesn't know what you're talking about. There can be censorship, there can be biases inside of models, and that's something that we have to figure out. I think math can be a good starting point, because it's more tractable for a human to verify whether the math is true or false. With us, we started with o1, but what we've been experimenting with is that there can be benefits to running the same paper through multiple models. It's almost like treating each AI model as a different peer reviewer. The same way that if you had a paper and you wanted it peer reviewed by humans, you would reach out to different humans, who also have different agendas and different biases and different backgrounds and different education, you can think of AI models in a very similar sense. And so that's something that we're looking into. And so you would be running it multiple times on the same paper, cross-referencing the results, maybe creating a confidence interval to build a holistic report and analysis, right? It's not just binary, there are these issues or there are not. Yeah, I think you definitely have to disclose which models you used when you did it, and whether models disagreed on certain things. When we come out with a tool that maybe anybody can just use, where you can upload whatever document you want or paste in a URL, maybe we allow you to choose which model you want to run it with.
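Treating each model as an independent reviewer and cross-referencing their verdicts, as discussed above, reduces to a simple agreement score. A hedged sketch, not YesNoError's implementation; the model names are placeholders:

```python
def consensus_report(verdicts):
    """Aggregate per-model error verdicts into a holistic report.

    verdicts: dict of model_name -> bool (did this model flag an error?)
    Returns which models flagged the paper and an agreement score,
    rather than a single binary answer.
    """
    flagged_by = sorted(m for m, v in verdicts.items() if v)
    return {
        "flagged_by": flagged_by,
        "confidence": len(flagged_by) / len(verdicts),
    }

report = consensus_report({"model_a": True, "model_b": True, "model_c": False})
```

A report like this also bakes in the transparency point from the conversation: the output names every model consulted, so disagreement is visible rather than averaged away.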
Maybe we let you run it with multiple models at the same time, and then there's some sort of synthesized report based on what all the models say. I think it's very important to be as transparent as possible about all of this. Yeah. And now that the token is live and the project is running, what role does the token play in the current ecosystem? We've retroactively been putting together a utility-based plan for the token. The things that I think make the most sense, and that we're looking to build, are: one, using the token to have YesNoError process a specific paper. That makes a lot of sense. The other one I'm very excited about is the idea of tokens acting as prediction markets. It's just a really great way for people to put their money where their mouth is and decide where they want things to focus. If we're going to go process 90 million, 100 million papers, the big question becomes not whether we're going to process them; we're definitely going to process them. The big question becomes, what order do you process them in? And so I think a very interesting use of the YNE token is, if you had a large list of potential topics that YesNoError could be prioritizing, maybe microplastics, maybe long COVID, maybe longevity, maybe brain tumors, or a specific drug, or DeepSeek even, then you could allow people to use the token to vote on where YesNoError should prioritize its efforts. The thing that I really love about YesNoError is that no business would ever make this, and I think you could have a lot of agents with a token where it just doesn't really make sense. But YesNoError uniquely does make a lot of sense, because it is a public good. And so if a lot of people decide that they want to go research mRNA or microplastics or whatever, YNE gives them the ability to point a massive amount of AI resources in that direction. And I think that's very fascinating.
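The topic-prioritization idea, letting holders direct the agent with tokens, reduces to a token-weighted tally. A hypothetical sketch of that voting mechanic, not the actual YNE mechanism:

```python
from collections import defaultdict

def prioritize_topics(votes):
    """Order audit topics by the total tokens staked behind them.

    votes: iterable of (topic, token_amount) pairs.
    """
    tally = defaultdict(float)
    for topic, amount in votes:
        tally[topic] += amount
    # Highest-backed topics get audited first.
    return sorted(tally, key=tally.get, reverse=True)

queue = prioritize_topics([
    ("microplastics", 500.0),
    ("long COVID", 900.0),
    ("microplastics", 700.0),  # repeat votes accumulate per topic
    ("longevity", 300.0),
])
```

Since every paper eventually gets processed, the vote only decides ordering, which is what makes the "prediction market on what should be audited next" framing work.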
Do you think anything could be done there with this idea of prediction markets? Because people always come back to speculation or gamification: tokenization, betting on certain conclusions being true or not based on the papers. Like, hey, I bet that the black spatulas are actually bad, and then maybe that gets confirmed or refuted by the results of the audit. Or even just a specific paper: I feel like there's an error in this paper, as I peer-reviewed it myself, and I'm willing to bet that there's actually an issue, so the agent should also take a look at it. And then there's also the idea of expanding this out, offering it as an oracle or as a feed of data for other services to be built on, which is similar to what people are doing now with election results and sports betting, right? Creating this as a data source, or even just having some kind of API where people can access this as a data feed and plug it into their own agents. I think that's very interesting. I think what you have to make sure of is that the prediction market doesn't impact the result of the prediction, right? That's the really important thing you have to make sure of there. But I think you can have an agent that does something that's good for the world and is open to working on a number of different topics. And then on the other side, you can have people who are interested in speculating, and depending on what they believe to be true, they could choose what they want to speculate on the agent working on or discovering. On the one hand, you now have a public good that's being powered and being pointed in different places, and that's just good. There's no way to say that's not good; that's just good.
And then, on the other side, you have people who are potentially benefiting from that speculation. In an open market, combined with a public good, in an area where a business would never make something that just goes off to audit all of science, this creates a very interesting new dynamic between token holders and an AI that costs money to run. So I definitely could see something like that happening. And the way that we've been building this system, we haven't really talked about this much, but the new system, not my prototype, is built entirely on top of our own APIs that we've been producing. So the front end for YesNoError will be built on our own APIs, and we could decide in the future to open those up, because we actually designed the system for that. And then, I guess this goes into third-party developers: is this a project that people can join? Is there a GitHub repo that developers can contribute to, or can the community get involved in spreading awareness? How do you guys grow this project? How do people get involved with it? So right now we don't have a public GitHub repo. The biggest reason for that is my first version was a pretty hacked-together prototype, and I wouldn't want to subject anyone to that GitHub repo. The second part is that we're not done building V2. I don't think it's off the table, by any means. We're basically building an AI agent framework that can check things for truth and validity, and we are applying that to science research specifically. What's most important to myself and my co-founder, Ben Parr, is that we do a really good job with that first. We don't want to spread ourselves thin. There are always a lot of things you can go do, but we're really here to audit science. We've got to stay true to that. We've got to build something that's really great. So that's what we're focused on.
After we do that, there are a couple of interesting things we could do. We could open source this, we could make APIs available, we could start to look into analyzing more than just science research. But we've got to do a great job at this first. Yeah, so I really like what you're doing, and I really see the parallels to something like, say, Polymarket, where you arrive at this consensus on what people actually think is going to happen, and that data is so valuable that it even ends up now on Bloomberg terminals. Approaching this from a scientific view is interesting, because in science maybe you can't analyze enough papers to say that something is definitively true, but you can collect enough data to say that something is not true, which is how most science is done, right? You can only disprove; you can't prove anything. So that is really cool, because that's aligned here. And if you guys keep working on this, you'll eventually have this whole data set of falsities, things you can then use to derive truth and predictions. So I think that's really cool. And the other thing that's interesting is, say you have YesNoError go read a million papers, and let's say 1 percent of them have errors. So one part of the truth is saying, hey, the truth is that 1 percent of these have errors, with varying amounts of impact. But I do think there's something else that's interesting about the other 99 percent of papers that we read. Because I've been contacted by a lot of people who are researchers or research labs, or even people who are personally impacted.
I had someone reach out to me who said they love YesNoError. They actually have a brain tumor themselves, and they're trying to figure out if the research their doctor is talking about is accurate, or they're trying to do their own research to figure out if there's something else they could be bringing to their doctor, something else they could be exploring. And whether it's an individual or an actual research department, if you say there are 50,000 articles written on the topic you're looking into and go read them all, that's not something a human or a team of humans is actually physically capable of. So you have this weird situation where maybe there's a tremendous amount of data out there, and you need it, but you can't physically even access it to do the thing you're doing. So what's going to happen is you're only going to read the most cited papers, the ones that other people have agreed are important, and you don't know what their reasoning is for why those are the most cited. So I think there's also something very interesting that can happen on the side: what if we make it possible for you to talk to an AI researcher under YesNoError that has read every paper about DeepSeek, or every paper about this brain tumor, or the drug you're specifically looking at? You can talk to it, and when it gives you answers, it can actually be pulling in citations from all the actual papers it's looking at, and it would be pulling in citations from papers that we know don't have errors in them. So I think that looking for errors is important, but at the high level, democratizing truth, whatever that is, is also super important. Yeah, at that point you've definitely got to have the hallucinations down; you've got to be able to click on a sentence it says and have it expand to show the source. Yeah, that's exactly what I'm imagining, and we 100 percent can do that. We're building the system right now.
We are going to come out with something like that. Basically, you'll be able to give it a topic, ask it a question, and it gives an answer. It will have citations for every single thing it says, and you'll be able to click a citation and actually go see the real paper it's being pulled from. I'm interested to see how people use that. And is this something that you think will get adopted by the scientific community formally, like inside of journals and magazines? Where does this sit in the scientific community? You said the researchers said thank you for finding these errors, and that's great. What about the greater scientific community? Are they still being dismissive of this, or are they all going to be using it? So we're doing a bottom-up approach with the individual researchers and students. That's one approach. We're talking to some companies who want to apply this to their internal documents, like I was mentioning earlier. And then we are starting to talk to schools, a top-down approach, where maybe YesNoError is something they can supply to their students and their researchers before they come out with things. We haven't had any negative feedback on it yet. I think, so far, people are generally pro making research more accurate. Maybe we come across some group that disagrees with that at some point, but we haven't run into that yet. What about the scientific journals? Are they on board, or are they saying, oh, it should be human reviewed, that's how we make sure everything is good? We haven't talked to them yet. We've mostly been focused on everyone who's actually writing the research, but the plan is to go talk to them. I want to have gone through more papers first, so we can approach them and say, look, this is the exact protocol we followed, we went through 100,000 papers, this is how it's working. I want to approach them with a very put-together result.
And this is a different thing, but within the crypto ecosystem, do you guys have partners? Are you working with different projects? Are you part of the launchpads or the ecosystems inside of crypto? Are there DeSci projects, or the Solana ecosystem? What does that look like? Our first investor was Boost VC, so Adam Draper and Brayton Williams, and I've known those guys for a very long time. I lived in their tower, in their basement, for two years. I was an advisor on their last fund. They're obviously very well known in crypto. YesNoError was listed on Binance Alpha, which was not done by us; that was just picked up organically. Binance also just put out a research report, like a 20-page research paper on DeSci, where they talked about YesNoError, which was great. And then we've done a partnership with ResearchHub, Brian Armstrong's company. And we're building relationships with a lot of people within the crypto space, but we're not part of any specific programs. When you did the token, was that through a launchpad or Pump.fun, or did you just do it yourself? Yeah, when I did the token, I just did the thing that I thought was easiest. So I just pushed it on Pump.fun, which, if I were to go back, I would do differently next time. But in the beginning, the expectation was not that it would get to the size it is right now. I originally thought this would just be a great example to show people what you could do, but now this is what I'm focused on. Yeah, there is an interesting trend of people launching utility tokens on Pump.fun just because it's become such a great mechanism for discovery, with unmatched virality, liquidity, and eyeballs for your project. It was really just the ease of use. I knew I could fill out a form and push a button and the thing would be live, and I did it like five minutes before I posted the tweet, and the rest is history.
And are there other DeSci projects or other AI projects that you see in the space now that are really interesting, maybe complementary or just in parallel, where you really like what they're doing and you think it's going to be a part of this future? I really like what ResearchHub is doing. I think that makes a lot of sense, and I know they've been around for a while. The thing I'm looking for is people who are coming in from non-crypto backgrounds to do things in crypto and are true believers in the idea of tokenizing things. Not DeSci, but one example is Yohei, who created Pippin, which is an autonomous agent platform. Yohei was the creator of BabyAGI. I wrote about him in my original article. That's someone who's coming from Web2 to do Web3 things, so I'm interested in that. I know a lot of projects are started in crypto to look like they're doing something, and then people speculate, and then they're not actually doing anything. I'm obviously not excited about anyone doing something like that. Yeah, for sure. I think it's something that we try to tell our audience: look at the background of the project, look at what they've been doing before crypto, or even if they've been in crypto for a while, what they've been doing there, because the people working on some of the best projects here have been thinking about these problems for a long time. And that doesn't mean that if you're trying to start something you're not welcome, right? You're welcome, but you really do have to show that you're doing this for the right reasons, because people can see through it. And especially the other founders: that's going to determine whether or not they want to work with you, whether they want to support you, or whether they want to invest in you.
Yep, yep. I would say that anyone out there who's thinking of making something, you've got to be authentic about it, and I think you probably couldn't have launched YesNoError in any other way than it was launched. People could definitely tell in their gut that this was something that happened completely organically, and that's why it's been so successful. And then you have the decision after that: do you go all in, and can you see this through? That's what we decided we would do, and so I think commitment and conviction on top of authenticity is a very powerful combo. Amazing. And Matt, do you think the scientific community and humanity are ready for what this Pandora's box that you've just opened might uncover? There have been a lot of mistakes in research papers that ended up impacting millions of people's lives. Do you think now, as you're feeding an AI model all these scientific research papers, a lot of errors can start coming out, and humanity is going to look back and go, oh wow, this is huge? Yeah, I think the greatest thing about DeSci and blockchain in general is you can't stop it. So it doesn't matter if you're ready; things happen. So that's one thing, and I think that's great. I think the other thing is, let's say we go a thousand years into the future and you look back in time. You'd ask, around the advent of AGI, whatever that means, did somebody go use AI and point it not at a business solution but at a public good, auditing science and public information? Yeah, I think that's something that would exist. Do you think this might even lead to finding the cure for cancer, or to finding certain things that have been out there but that we've just not pursued in the right direction due to misinformation or lack of data? For example, the paper you referenced on debt and growth: there was an Excel error, and there were austerity measures throughout Europe due to that, based on flawed data.
For us, right now, we're just focused on finding errors, and I think that assists researchers in doing what they're focused on. So I wouldn't say that YesNoError is solving cancer right now. That's not something I can claim we're doing. But are we helping people who are working on solving problems? Yes. Is finding errors in papers helpful for the scientific community? Yes. Does it move things faster? Yes. Is AI becoming smarter than most people in the world? Yes. Is AI seemingly going to continue to get smarter and smarter, to the point where it's actually smarter than everybody? That all seems very likely. Now, in order to solve some of these larger problems, it's likely that AI will need to start being able to create simulations, so that it can actually run scientific experiments in simulated environments. Do I think that's going to be possible at some point? Yeah. So when do we solve cancer? I don't know. But is AI definitely the thing that will help us do that, if we ever do? A hundred million percent, absolutely. And how far do you think we are from AI starting to run those simulations? One year away, five years away? I don't know. Maybe it's decades. Maybe we're in the simulation right now. That, I'm not sure. Makes sense. And one last question to wrap up these episodes: if you could travel back in time to when you started working with AI in 2016, is there anything you'd tell that version of yourself? Spend more time coding. There's nothing more important than learning to code. A lot of people say, don't go into CS, don't learn to code, because AI has gotten so big. But there's nothing more important than being able to build, and the people who learn to code and learn to build are going to be the people who create. They're going to be able to do it a million times better because of AI; it's not going to cancel them out. Makes a lot of sense.
And Matt, where can our listeners find more information about yourself and what you're building, and stay connected? So if you want to follow me, go to X.com/MattPRD, and if you want to learn more about YesNoError, go to yesnoerror.com. We'd love to have you involved. Awesome. It was a pleasure. Thank you guys.
Thanks for tuning in to the ChainStories podcast, where disruptors become trailblazers. Don't forget to subscribe to hear more inspiring stories from those who are pushing boundaries in digital assets. Brought to you by Dropout Capital, where bold vision transforms into reality. Check out our social media links down below and visit us online at dropout.capital. And remember, the future belongs to those who dare to challenge the norm.