Feb. 12, 2025

"How I built a multi million dollar startup in 24 hours" - Interview with CEO of YesNoError


In this episode, Matt shares the incredible story of how he built YesNoError, an AI and crypto-powered platform, in just 24 hours, and saw over $150M in trading volume on its first day! 🚀

A self-taught entrepreneur, Matt skipped the traditional college route and went straight to Silicon Valley, where he connected with giants like Sam Altman and Brian Armstrong. His unconventional path didn’t stop him from launching successful companies and making waves in the AI, crypto, and Web3 spaces.

YesNoError has already audited nearly 10,000 research papers, uncovering 100+ errors, and attracted over 9,000 token holders on day one. With plans to process 1 million papers, the platform is set to make a huge impact on the scientific research community.

If you’re fascinated by the future of AI, crypto, and scientific validation, this episode is packed with insights you won’t want to miss.

🎙️ THE ChainStories Podcast – powered by Dropout Capital and the Blockchain Education Network!

🐦 Twitter/X: https://x.com/BlockchainEdu
👤 Guest on LinkedIn: https://www.linkedin.com/in/mattschlicht/ 
🎙️ Host on LinkedIn: https://www.linkedin.com/in/cryptoniooo/


-----------------------

What is BEN?

The Blockchain Education Network (BEN) is the largest and longest-running network of blockchain students, professors, and alumni around the world! We are on a journey to spur blockchain adoption by empowering our leaders to bring blockchain to their communities!

https://learn.blockchainedu.org/

-----------------------

Thank you for listening!

Want To Start Your Own Podcast? We Use BuzzSprout:
https://www.buzzsprout.com/

WEBVTT

00:00:00.729 --> 00:00:07.240
Welcome to the Chain Stories podcast, the podcast that celebrates disruptors who defy convention.

00:00:07.690 --> 00:00:14.849
Here, we dive into the bold stories of trailblazers who turned audacious ideas into billion dollar ventures.

00:00:18.248 --> 00:00:20.117
Welcome to the ChainStories podcast.

00:00:20.388 --> 00:00:25.917
Today, I have a pioneer, someone who is working at the forefront of AI.

00:00:26.318 --> 00:00:30.437
Actually, it's something I didn't know existed before; it only just came to my attention.

00:00:30.897 --> 00:00:31.867
The founder of YesNoError.

00:00:31.888 --> 00:00:36.228
One of the first autonomous DeSci AI agents.

00:00:36.628 --> 00:00:38.857
Matt, it's a pleasure to have you on the show.

00:00:39.237 --> 00:00:43.317
And for our listeners, would you like to take a deep dive and introduce yourself?

00:00:43.718 --> 00:00:45.807
Yeah, thank you so much for having me here.

00:00:45.817 --> 00:00:48.618
Happy to give you a quick background on myself.

00:00:48.618 --> 00:00:50.698
So I'm Matt Schlicht.

00:00:50.878 --> 00:00:55.398
I grew up in Southern California and I didn't go to college.

00:00:55.417 --> 00:01:01.527
Instead of going to college, I moved directly to Silicon Valley back in 2007, and I dove into the world of venture backed startups.

00:01:01.927 --> 00:01:06.918
I joined a company called Ustream, which was one of the first companies pioneering live video.

00:01:07.283 --> 00:01:09.983
And I convinced them that they should let me run all of product.

00:01:10.052 --> 00:01:16.242
Ustream ended up four years later getting acquired by IBM for over $150 million.

00:01:16.242 --> 00:01:21.563
And I got super lucky to get educated like in the streets instead of college.

00:01:21.957 --> 00:01:30.278
And when I was there, I got mentored by this amazing guy, Josh Elman, who launched the Facebook API at Facebook and helped grow Twitter.

00:01:30.468 --> 00:01:32.897
He invested in Musical.ly, which became TikTok.

00:01:32.947 --> 00:01:35.798
Just like an expert product person in Silicon Valley.

00:01:36.197 --> 00:01:42.097
I took a company through Y Combinator, the same class that Brian Armstrong, the founder of Coinbase, was in.

00:01:42.097 --> 00:01:43.177
He was the only person at Coinbase back then.

00:01:43.528 --> 00:01:46.358
Forbes 30 Under 30 twice, for whatever that is worth.

00:01:46.388 --> 00:01:49.278
And then I've taught myself to code over my journey.

00:01:49.748 --> 00:02:10.187
And I have been building in artificial intelligence since 2016. When OpenAI came out with their API, the first private version, I was one of the first, I don't know, couple hundred people that had access to it in 2020, and I have been on the forefront of building AI and autonomous agents.

00:02:10.548 --> 00:02:27.397
Since then, and most recently, I launched YesNoError, an AI agent that goes through all scientific research that has ever been published and uses the most advanced AI models to detect errors in these papers.

00:02:27.798 --> 00:02:37.038
And we've already found many errors and contacted the authors of the papers who confirmed that those errors were real and went on to go fix their papers.

00:02:37.068 --> 00:02:41.617
And we're on a mission to do that through all scientific research that's out there.

00:02:42.018 --> 00:02:43.108
That's quite a journey.

00:02:43.457 --> 00:02:48.617
And going back in time, a lot of our listeners might face this situation.

00:02:49.027 --> 00:02:54.448
Did you think for a second, or did your parents think or tell you, like, you need to go to college?

00:02:54.717 --> 00:02:56.527
Do you have to go and finish this out?

00:02:56.527 --> 00:02:59.448
You can't drop out or was this a self debate?

00:02:59.937 --> 00:03:03.198
What made you decide, okay, I'm going to go on my own and try.

00:03:03.598 --> 00:03:06.187
Yeah, so I don't have anything against college.

00:03:06.217 --> 00:03:08.538
I definitely wanted to go to a great college.

00:03:08.647 --> 00:03:12.048
The high school that I went to was a very good high school.

00:03:12.057 --> 00:03:13.048
It's called Sage Hill.

00:03:13.057 --> 00:03:14.138
It's in Newport Beach.

00:03:14.587 --> 00:03:15.717
I didn't have a lot of money.

00:03:15.747 --> 00:03:18.057
Everybody else around me had a lot of money.

00:03:18.518 --> 00:03:22.997
And what was interesting is the school I went to before that, it's called Waldorf.

00:03:23.048 --> 00:03:25.098
Anyone can look it up if they're interested in it.

00:03:25.407 --> 00:03:28.758
One of the pillars of Waldorf is that you don't use technology.

00:03:28.777 --> 00:03:30.087
So I didn't have a computer.

00:03:30.087 --> 00:03:31.397
I wasn't playing video games.

00:03:31.397 --> 00:03:36.307
I wasn't watching TV, but when I got to high school, I suddenly was like flooded with all this technology.

00:03:36.307 --> 00:03:38.608
And I was like, Oh my God, like a thousand people work at Google.

00:03:38.608 --> 00:03:39.747
This is like super crazy.

00:03:40.048 --> 00:03:41.298
And I got my first laptop.

00:03:41.318 --> 00:03:43.418
And so in high school, I didn't do any homework.

00:03:43.518 --> 00:03:53.177
All day long, all night long, all I did was build things on the internet, just super fascinated that you could go on there, nobody cared how old you were, you could just invent anything that you wanted.

00:03:53.407 --> 00:03:57.388
And so junior year of high school, I actually ended up getting kicked out.

00:03:57.548 --> 00:04:01.848
They asked me not to come back for senior year, which was obviously super upsetting.

00:04:01.858 --> 00:04:28.793
And so over summer, I actually got access to the school's email system. I hacked into it, and I found an error in their system, and I told them about it, so I didn't exploit it or anything. I said, hey, I actually found an issue in your email system, I can access everybody's emails, I think you should probably fix that. And because of that, they actually asked me to come back and finish out senior year, which was awesome.

00:04:29.153 --> 00:04:32.507
My grades weren't super good, so I didn't get into any like great colleges.

00:04:32.526 --> 00:04:38.947
And so the option was like, do I go to a not great college or should I just fight my way in to Silicon Valley?

00:04:39.276 --> 00:04:44.906
And that's what I did. Two or three years later, I actually went back and gave the commencement speech for the high school.

00:04:45.307 --> 00:04:46.297
Fascinating.

00:04:46.367 --> 00:04:49.516
And so pretty much you taught yourself how to code during high school.

00:04:49.916 --> 00:04:58.576
And you were swamped with these materials, in this world of digital economy and digital information, and you just took a straight dive into it.

00:04:58.846 --> 00:05:02.536
Was it your dream to one day go to Silicon Valley and build your startup?

00:05:02.567 --> 00:05:04.416
How did this start within you?

00:05:04.617 --> 00:05:09.456
As I grew up, I always wanted to be an inventor and I didn't know what that meant.

00:05:09.456 --> 00:05:11.456
I just wanted to create things.

00:05:11.896 --> 00:05:18.302
And what I discovered in high school, when I got access to the internet, this was like when Facebook was coming out.

00:05:18.351 --> 00:05:24.572
I remember being one of the first people who got like a Gmail account and you like had three email invites and you could send it to other people.

00:05:24.971 --> 00:05:28.521
What's fascinating about the internet is it's not a physical place.

00:05:28.961 --> 00:05:34.012
Anybody can grab a street corner and get access to every single person who's on the internet.

00:05:34.382 --> 00:05:44.302
And so if you can go build something digitally, you have the potential of the entire world, or at least anyone who's online, to interact with that thing you made.

00:05:44.312 --> 00:05:51.732
And so in high school, that just, if I was going to invent something, I was going to invent something that was digital, that was on the internet.

00:05:52.012 --> 00:05:56.531
And ever since then, yeah, my dream became, I'm going to go to Silicon Valley.

00:05:56.531 --> 00:06:02.271
And so the moment I graduated high school, I got a job, I worked my way in there.

00:06:02.576 --> 00:06:07.257
Started as an intern, moved up to business development, and then moved up to leading all of product.

00:06:07.656 --> 00:06:08.776
And yeah, that was the dream.

00:06:08.797 --> 00:06:12.846
And then ever since then, I've just been working with people inside of Silicon Valley.

00:06:13.307 --> 00:06:19.057
Working with investors and just staying on the forefront of technology.

00:06:19.057 --> 00:06:34.216
At first it was live stream video, then it was social networks, and then it was crypto for a little bit, and then for the longest time now it's been artificial intelligence, which is like the most transformative technology that probably will ever exist.

00:06:34.617 --> 00:06:35.326
Yeah, absolutely.

00:06:35.326 --> 00:06:54.101
We get asked this question a lot by founders from all over the world that we invest in or speak with: should I move to Silicon Valley? And Matt, do you see Silicon Valley today as a place where a founder working on AI and tech really must be, the center of the action, otherwise they're going to miss out and be left behind?

00:06:54.502 --> 00:06:59.512
I don't think it's required that you have to go to Silicon Valley in order to be successful.

00:06:59.721 --> 00:07:19.411
There's definitely examples of people who don't go to Silicon Valley and who become successful, but Silicon Valley is still the heart of technology, and if you are a young person and you want to throw yourself into this world completely, then you should.

00:07:19.812 --> 00:07:24.281
I still would strongly recommend that you should go move to Silicon Valley for a period of time.

00:07:24.531 --> 00:07:31.471
Because when you go outside and you walk down the streets, the people you bump into are the builders of technology.

00:07:31.502 --> 00:07:35.072
When you go to a party, the people there are the builders of technology.

00:07:35.072 --> 00:07:39.132
When you get coffee, the people next to you are the builders of technology.

00:07:39.451 --> 00:07:42.942
And you're gonna meet other people who are young like you.

00:07:43.146 --> 00:07:44.896
Who have their whole careers ahead of them.

00:07:45.206 --> 00:07:48.447
And what's going to happen is, you're going to build these really strong relationships.

00:07:48.757 --> 00:07:50.697
And like, this is a very long game.

00:07:50.906 --> 00:07:55.976
You're not just playing this for the first day or the first year or the first five years.

00:07:56.336 --> 00:08:01.927
This is something where the relationships you build, you can be working together with people for many decades.

00:08:02.276 --> 00:08:07.427
And so if you meet people who are also young when you're young and you build this really tight relationship.

00:08:07.737 --> 00:08:13.716
Then later when maybe you're both not in Silicon Valley you can be working together still.

00:08:14.067 --> 00:08:16.737
And they have built their career and you've built your career.

00:08:16.747 --> 00:08:24.021
So if you are thinking of being in tech and you have the means to move to Silicon Valley, you a hundred percent should do it for some period of time.

00:08:24.422 --> 00:08:25.802
Yeah, I think that makes a lot of sense.

00:08:25.951 --> 00:08:33.272
And going back to the time when you were there, working in business development in that first job, how was the transition into Y Combinator?

00:08:33.422 --> 00:08:38.572
This was back in 2012. Was it something where you told yourself, okay, I need to go build my company?

00:08:38.581 --> 00:08:39.822
You had applied before.

00:08:40.022 --> 00:08:46.591
Y Combinator, for everyone who doesn't know, is basically the number one startup incubator in the entire world.

00:08:46.591 --> 00:08:47.802
It's based in Silicon Valley.

00:08:48.201 --> 00:09:07.577
And this is where some of the most incredible companies in the world have come from: Airbnb, Twitch, Instacart, Coinbase, Stripe. All of these companies were started very young, funded by Y Combinator, and helped in the very beginning by the Y Combinator community.

00:09:07.577 --> 00:09:14.807
And if you are a startup founder, or an aspiring startup founder, one of the greatest investors that you can get is Y Combinator.

00:09:14.807 --> 00:09:22.298
It's basically this three month program to give you like very intensive like relationships and training and suggestions.

00:09:22.327 --> 00:09:26.847
And the whole goal there is build really quickly, find product market fit, go out and succeed.

00:09:26.849 --> 00:09:38.163
And it culminates at the end with something called demo day, where you and everybody else present to basically all the who's who of investors in the world who are looking to invest in the next big thing.

00:09:38.587 --> 00:09:48.548
And after I was at my first job at Ustream, I went and was lucky enough to get in after applying like three times into Y Combinator.

00:09:48.697 --> 00:09:51.518
And I took a company through Y Combinator in 2012.

00:09:51.898 --> 00:10:06.057
The company didn't end up succeeding, but we built an incredibly viral product called Tracks.by, where celebrities could take advantage of social networks and viral loops to launch content at a very large scale.

00:10:06.057 --> 00:10:12.018
So any rapper at the time you could think of worked with us, like Lil Wayne, Drake, like all these people were the people we were working with.

00:10:12.067 --> 00:10:13.418
We just didn't make any money at all.

00:10:13.918 --> 00:10:17.888
And my Y Combinator class was actually a very unique one.

00:10:18.307 --> 00:10:24.798
Instacart, which is a massive company now, was in my Y Combinator class, when it was just a handful of people.

00:10:25.187 --> 00:10:32.268
Brian Armstrong, the founder of Coinbase, was in my Y Combinator class, and he was the only person who worked at Coinbase.

00:10:32.268 --> 00:10:33.898
It was just Brian.

00:10:34.187 --> 00:10:37.227
And so Bitcoin was still very early, like Mt.

00:10:37.227 --> 00:10:38.508
Gox was still around.

00:10:38.518 --> 00:10:42.008
I think like the hack happened, around then or like right after then.

00:10:42.347 --> 00:10:44.018
So it was a very fascinating time.

00:10:44.097 --> 00:10:50.847
So if you have the chance and you have an idea you think is special, I would definitely recommend applying to Y Combinator.

00:10:50.847 --> 00:10:51.538
It's an incredible place.

00:10:51.937 --> 00:10:52.488
Awesome.

00:10:52.548 --> 00:11:00.508
And going back to that moment when you had Brian Armstrong in the same class, do you remember what were the comments or the sentiment of people towards crypto?

00:11:00.778 --> 00:11:06.177
And even yourself, what did you think about crypto, or, what is this guy out there building a Bitcoin exchange?

00:11:06.177 --> 00:11:07.577
What do we even need this for?

00:11:07.977 --> 00:11:09.837
It was very early.

00:11:09.998 --> 00:11:16.538
I think I bought my first Bitcoin in 2011 and I bought it on Mt.

00:11:16.557 --> 00:11:17.057
Gox.

00:11:17.557 --> 00:11:20.217
I ended up losing all of it in the hack that happened.

00:11:20.658 --> 00:11:25.008
So in 2012, I forget what the price of Bitcoin was, it was very low.

00:11:25.048 --> 00:11:28.967
Brian gave everybody in the class one, he said if you sign up for Coinbase, I'll give you a Bitcoin.

00:11:29.368 --> 00:11:36.057
And the sentiment was that most people didn't even sign up, because it was just this, like, ridiculous thing.

00:11:36.518 --> 00:11:38.967
And a Bitcoin also wasn't worth very much and people were confused.

00:11:39.317 --> 00:11:47.607
And to give you also another comparison, when we went out to go fundraise at the end, you have Brian Armstrong saying, hi, you have Mt.

00:11:47.628 --> 00:11:49.363
Gox Wild West of crypto.

00:11:49.403 --> 00:11:53.623
I'm gonna go super legit and build like the full legit version of Mt.

00:11:53.623 --> 00:11:55.692
Gox is like basically what he was saying.

00:11:56.322 --> 00:12:00.423
And he's like, oh, we take transaction percentages, and there's a business model, and there's a whole movement.

00:12:00.732 --> 00:12:07.743
Then you have me. I say, hi, I'm building viral products to help rappers launch music, and we make no money.

00:12:08.143 --> 00:12:13.883
My company in the beginning raised more money than Brian raised with Coinbase.

00:12:13.913 --> 00:12:19.302
And so that's just another example of like how people didn't understand.

00:12:19.702 --> 00:12:41.173
Like, it's so hard to be an investor; they didn't understand it. Garry Tan, who is now the CEO of Y Combinator and is doing a fantastic job, at the time was just a partner at YC, which meant he would help individual startups and give you advice and you could meet with him. Sam Altman, who is the founder of OpenAI, was also a partner at the time.

00:12:41.173 --> 00:12:43.852
So these people are just walking around and giving you advice.

00:12:44.253 --> 00:12:49.498
Garry had just started investing in YC startups himself.

00:12:49.498 --> 00:12:55.238
And that year that I was in there, he invested in Instacart and he invested in Coinbase.

00:12:55.677 --> 00:12:57.518
And he invested in me.

00:12:57.937 --> 00:13:01.947
And two out of three of those became multi billion dollar companies.

00:13:01.947 --> 00:13:06.778
I was the only one that didn't, but I like that there's some sort of pattern matching that I fit in with there.

00:13:07.177 --> 00:13:14.927
But I'm sure you got a lot of experience being in such an environment, and where you are today, you have a wealth of knowledge from that time.

00:13:15.317 --> 00:13:15.408
Absolutely.

00:13:15.618 --> 00:13:15.798
Yeah.

00:13:15.847 --> 00:13:24.847
I think if you're starting out now and you're going to go build a startup, the biggest mistake you can make is asking, how can I replicate the most successful people and where they are today?

00:13:25.097 --> 00:13:26.888
This is the biggest mistake that you can make.

00:13:27.067 --> 00:13:29.717
What you need to do is you need to go back to the beginnings.

00:13:29.758 --> 00:13:37.927
And if you're going to try to replicate someone, you need to start at where they started because there's no way that you can just, bam, make Stripe what it is today.

00:13:38.087 --> 00:13:39.587
Bam, make Bitcoin what it is today.

00:13:39.638 --> 00:13:41.238
Bam, make Coinbase what it is today.

00:13:41.447 --> 00:13:46.138
You have to figure out where it started and then you have to progress up.

00:13:46.168 --> 00:13:48.798
And then through that, you're going to have to go find your own journey.

00:13:48.857 --> 00:14:03.398
One of the biggest values for me of going through Y Combinator is that I know these very rare tidbits of what certain people liked and did and thought and said well before anybody knew who they were.

00:14:03.873 --> 00:14:04.543
Absolutely.

00:14:04.552 --> 00:14:11.932
And since you're in such a closed environment where unicorns came out, is there a secret ingredient that you have noticed?

00:14:11.942 --> 00:14:14.182
Was it the team that Brian hired?

00:14:14.192 --> 00:14:16.263
Was it the VCs on the cap table?

00:14:16.482 --> 00:14:22.013
Was there something specific you noticed that keeps happening over and over, or was it just a combination of elements?

00:14:22.072 --> 00:14:25.243
Also, the fact that the market is super favorable sometimes.

00:14:25.643 --> 00:14:58.138
Yeah, so this is the whole science of either starting a company or investing in companies, and it's constantly evolving and changing because narratives are changing, the economy's changing, the market's changing, the world's changing. And so I think one side of it that is clear is you need to have something that can impact a large number of people, and that, when you have it, even in a not-perfect version, there are people out there who will just truly love it.

00:14:58.187 --> 00:15:00.538
Even if it's not perfect, they will just truly love it.

00:15:00.538 --> 00:15:03.638
You don't want 10,000 people who slightly care about something.

00:15:03.898 --> 00:15:10.197
If you can even have a hundred people who really care about something, like your customers that's where you want to start, right?

00:15:10.207 --> 00:15:11.618
Because then you can grow from there.

00:15:11.687 --> 00:15:19.008
You need to be doing something where your potential customers or your user base are super crazy passionate about it.

00:15:19.008 --> 00:15:20.857
This is like the most important thing to them.

00:15:20.868 --> 00:15:22.057
So that's one thing.

00:15:22.457 --> 00:15:28.048
The other thing is if you can somehow recognize a large movement.

00:15:28.447 --> 00:15:33.607
That with a group of people is like the number one thing they care about.

00:15:33.977 --> 00:15:39.388
But if you went and talked to most normal people, they have no idea what you're talking about.

00:15:39.697 --> 00:15:51.442
That is like a very unique opportunity where maybe you can be the one to shepherd and formalize this thing that this group of people really care about, and you can bring it to the masses, right?

00:15:51.442 --> 00:16:02.863
Brian Armstrong saw something interesting is happening with crypto, the underlying concept of a trustless system that can power lots of different things, money being one of them, but lots of other use cases.

00:16:03.143 --> 00:16:06.322
No matter if people are making money on it or not he saw that.

00:16:06.523 --> 00:16:13.773
Enough people cared about this, they cared about it really passionately, and if this got big, it would not get sorta big, it would be like, the biggest thing, right?

00:16:14.182 --> 00:16:20.283
He also had previously worked at Airbnb, he was a developer, he could code things, so he was also very smart.

00:16:20.658 --> 00:16:26.538
So if you take those two things, experience and ability to execute and he knew what Airbnb looked like as it got big.

00:16:26.888 --> 00:16:33.658
And then he had also found an interesting pocket of passionate people where if they were right, they would be really right.

00:16:33.817 --> 00:16:35.427
And he merged these two things.

00:16:35.477 --> 00:16:38.148
And then there's a huge element of luck there, right?

00:16:38.168 --> 00:16:42.097
Like just because he did those two things doesn't mean that he would have done them right.

00:16:42.393 --> 00:16:45.332
Doesn't mean that that's actually how the world would play out.

00:16:45.702 --> 00:16:52.163
Probably if you ask Brian today, is Bitcoin anywhere near as big as you think it will be, he would say, no, it's super small compared to how big I think it would be.

00:16:52.482 --> 00:16:55.903
Ability to execute, a group of people who care about something.

00:16:56.258 --> 00:17:01.217
If that something can get really big, it's going to be really ginormous, and then a good amount of luck.

00:17:01.618 --> 00:17:04.248
Absolutely, I think that makes a lot of sense, especially with crypto.

00:17:04.248 --> 00:17:08.258
He was in the vanguard, he was the first pioneer, and there were no roads.

00:17:08.657 --> 00:17:10.087
But there was a lot that could be walked.

00:17:10.407 --> 00:17:17.758
And I also noticed that myself, being in this space for ten-plus years, that everyone out there thought I was crazy when speaking about this.

00:17:17.807 --> 00:17:18.978
People still think you're crazy.

00:17:19.178 --> 00:17:22.867
Probably you go talk to most people right now about crypto and they still think you're crazy.

00:17:23.268 --> 00:17:29.298
So I think that's a great sign to continue in this space, as they say: if you're here, you're early. Absolutely.

00:17:29.417 --> 00:17:31.627
And Matt, going back to your story.

00:17:31.778 --> 00:17:41.548
So you went through Y Combinator, and then going into 2016, you mentioned you were one of the first people to use OpenAI's API.

00:17:41.907 --> 00:17:44.397
How did you come into AI?

00:17:44.478 --> 00:17:47.048
What was the sentiment at the time?

00:17:47.278 --> 00:17:51.958
Especially since this was probably one of the first versions, if not the first version, of OpenAI's release.

00:17:51.958 --> 00:18:00.377
I'm sure it was not this robust, and probably the system would hallucinate a lot when you gave it a prompt. What drew you into it?

00:18:00.577 --> 00:18:20.442
Actually, a little bit before that, in 2014, I made a little experiment called ZapChain. I thought it'd be fun to make a social network where, instead of upvoting each other, you gave people tiny amounts of Bitcoin. This was before ICOs happened; I think if we were a little bit later, we could have done an ICO, and that would have been an incredible way to monetize and build that.

00:18:20.722 --> 00:18:26.067
But I got all the who's who, anyone who was anyone at the time on crypto was using ZapChain.

00:18:26.498 --> 00:18:34.387
Vitalik did an AMA on ZapChain before he launched Ethereum to promote the launch of Ethereum with the people who were on there, so it was pretty crazy.

00:18:34.798 --> 00:18:44.218
2016 I saw that you could start to build chatbots on places like Facebook Messenger and other platforms and AI was getting better and better.

00:18:44.548 --> 00:18:46.637
Nowhere near where it is today.

00:18:46.637 --> 00:18:49.278
We did not even have large language models.

00:18:49.278 --> 00:18:54.147
So all of the experiences people are used to today with ChatGPT, that didn't exist at all.

00:18:54.178 --> 00:18:55.307
It was way more basic.

00:18:55.708 --> 00:18:57.917
But it was very clear that this is where the world was going.

00:18:57.938 --> 00:19:08.798
You were going to get to a place where you could just talk to an AI and an AI would be able to do most anything and everything that a person could and maybe eventually even better.

00:19:09.248 --> 00:19:18.038
So in 2016 I started a company called Octane AI where you could create a chatbot and you could use very basic AI with that.

00:19:18.528 --> 00:19:23.087
And that company right now is profitable, makes millions of dollars a year.

00:19:23.188 --> 00:19:24.788
Has over 3,000 customers.

00:19:24.837 --> 00:19:33.077
We found a niche specifically in helping e-commerce brands use AI to help their customers figure out what products they should buy.

00:19:33.307 --> 00:20:00.788
But because I've been on the forefront with this team building AI products, when OpenAI came out with their first beta for the OpenAI API, this was like late 2020, I was lucky enough to get an email from them inviting me to be one of, I think, the first couple hundred people who had access to this incredible GPT technology. And back then, when you used it, this is the technology that became ChatGPT.

00:20:00.807 --> 00:20:02.288
It couldn't even do poetry.

00:20:02.337 --> 00:20:03.867
You would say hey, can you make a poem?

00:20:03.867 --> 00:20:05.077
Can you make something rhyme?

00:20:05.087 --> 00:20:06.758
It wasn't even good enough to do that.

00:20:06.768 --> 00:20:08.917
It was very bad compared to where it is today.

00:20:09.167 --> 00:20:09.738
It was magic.

00:20:09.738 --> 00:20:11.018
You could do incredible things.

00:20:11.048 --> 00:20:16.067
And so ever since then, every single day, I've coded with LLMs.

00:20:16.067 --> 00:20:17.897
I've coded with OpenAI's API.

00:20:17.897 --> 00:20:25.617
I've coded with Google's and Anthropic's and Llama and, most recently, DeepSeek, because it's just like having magic in your hands.

00:20:25.617 --> 00:20:31.653
If crypto is like this thing that you can trust and it's a ledger and you can have all sorts of use cases for that, which is super important.

00:20:31.932 --> 00:20:41.532
What's fascinating about AI is it's intelligence as a resource that you can program and it's just exponentially getting better and better.

00:20:41.532 --> 00:20:43.833
So I started building games with it.

00:20:43.833 --> 00:20:45.502
I started building chatbots with it.

00:20:45.502 --> 00:20:59.393
I started exploring different techniques like RAG, where there's this concept of what are called embedding vectors, where you can take a chunk of text and basically give it an address in, like, a thousand-plus-dimensional space.

00:20:59.653 --> 00:21:09.762
And then you can use that to find correlations between different texts, and you can use that to power chatbots. Anyone who's interested in building with AI, look up RAG; it's very basic once you start building with it.
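
A minimal sketch of the retrieval idea described here, not code from the episode: each chunk of text gets an embedding vector, and cosine similarity finds the chunks closest to a question. The OpenAI Python client and the model name are illustrative choices; any embedding model works the same way.

```python
# Minimal RAG-style retrieval sketch: embed text chunks, then find the
# chunks most similar to a question. Model name is illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

chunks = [
    "Bitcoin was first described in a 2008 whitepaper.",
    "Embedding vectors place text in a high-dimensional space.",
    "Ustream was an early live-video streaming company.",
]

def embed(texts):
    """Return one embedding vector (a point in a ~1,500-dimensional space) per text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)

def retrieve(question, k=2):
    """Rank chunks by cosine similarity to the question and return the top k."""
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("What are embedding vectors?"))
```

The retrieved chunks would then be pasted into the prompt of a chat model as context, which is the "RAG" pattern mentioned above.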

00:21:09.823 --> 00:21:12.732
Yeah, I've been building with this technology since you could.

00:21:13.163 --> 00:21:39.987
And in 2023, one of the interesting things that I saw was this rise of autonomous agents. I saw people building AutoGPT, which is now a top-10 most-starred GitHub repo, like 170,000 stars, and BabyAGI, which I think has 20, 30,000 stars, and people were experimenting with, what if, instead of you telling the AI what to do, you had the AI tell the AI what to do, and it could do that in a loop?

00:21:40.376 --> 00:21:47.027
And you could start to build AIs that were autonomous, that just kept coming up with a to-do list, and they would do the to-do list, and then they would keep going.
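
A stripped-down sketch of that to-do-list loop, in the spirit of BabyAGI rather than its actual code: the model works the next task, proposes follow-up tasks, and the loop continues until the list is empty or a step limit is hit. The OpenAI client and model name are illustrative assumptions.

```python
# Sketch of a BabyAGI-style loop: the AI works a to-do list and extends it itself.
# Assumes the OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
OBJECTIVE = "Summarize the most interesting AI research of the week."
tasks = ["Make a plan for achieving the objective."]

def ask(prompt):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for step in range(5):                      # hard step limit so the loop terminates
    if not tasks:
        break
    task = tasks.pop(0)
    result = ask(f"Objective: {OBJECTIVE}\nCurrent task: {task}\nDo the task.")
    print(f"[{step}] {task}\n{result}\n")
    # Ask the model what (if anything) should be done next, one task per line.
    followups = ask(
        f"Objective: {OBJECTIVE}\nLast result: {result}\n"
        "List up to 2 new tasks, one per line, or reply DONE."
    )
    if "DONE" not in followups:
        tasks.extend(t.strip() for t in followups.splitlines() if t.strip())
```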

00:21:47.426 --> 00:21:50.826
And when I went and talked to a bunch of people about this, they'd never heard of it.

00:21:50.836 --> 00:21:55.186
So in early 2023, I wrote the number one article about autonomous agents.

00:21:55.186 --> 00:22:03.797
I went and talked to every single person who was doing anything with autonomous agents, including the founders of AutoGPT and BabyAGI, people from Nvidia, top investors.

00:22:04.196 --> 00:22:12.307
And when I published this article, a quarter million people read it, and I got super connected to anyone and everyone who was building something.

00:22:12.346 --> 00:22:16.166
I think a lot of companies started, because they read that article.

00:22:16.477 --> 00:22:23.767
And since then, I've just been building autonomous agents, and that's where YesNoError ended up coming from over a year later.

00:22:24.166 --> 00:22:31.967
A question that I have regarding the agents: what is the difference, the essential difference, between an agent and a bot?

00:22:32.366 --> 00:22:34.376
What distinguishes them in the system?

00:22:34.777 --> 00:22:41.537
So in some situations, I think people can probably use the words interchangeably.

00:22:41.936 --> 00:23:01.011
I think when people talk about autonomous agents, though, they're specifically referring to an AI or a bot that is not only doing what you are telling it to do, but is deciding itself, at certain points in its system, and thinking about what it should do next.

00:23:01.291 --> 00:23:07.902
If you ask ChatGPT a question, you're giving it a very direct ask, and it's just going to come back with one answer.

00:23:07.912 --> 00:23:10.332
So that's not really an autonomous agent.

00:23:10.352 --> 00:23:13.872
You ask it something, it gives an answer back, and then the thing stops there.

00:23:14.271 --> 00:23:22.201
Whereas an autonomous agent, might be like, hey can you go analyze the most interesting science of the day?

00:23:22.301 --> 00:23:27.632
And so then maybe the first thing the agent says is, okay, I'm supposed to go figure out what the most interesting science of the day is.

00:23:27.632 --> 00:23:29.872
So what is the first thing I should probably do?

00:23:29.892 --> 00:23:36.061
Maybe the first thing I should probably do is I should probably go Google like different science topics.

00:23:36.061 --> 00:23:38.082
Then it goes and Googles it, then it pulls back results.

00:23:38.082 --> 00:23:43.852
Then it says, you know what, are these results I got good enough or do I need to come up with even better results?

00:23:43.862 --> 00:23:45.362
Maybe I need to come up with even better results.

00:23:45.372 --> 00:23:47.602
Then it generates a list of new searches it needs to do.

00:23:47.892 --> 00:23:49.832
Then it goes out and it does those searches.

00:23:50.041 --> 00:23:51.751
And then it pulls back those results.

00:23:51.751 --> 00:24:00.521
It says, okay, is the science actually in the content I just pulled back, or do I need to go look inside of the documents to find out, maybe it links to a PDF somewhere, maybe it links somewhere else.

00:24:00.791 --> 00:24:04.281
And then it keeps going in this loop, where it has an objective.

00:24:04.442 --> 00:24:19.182
And it's autonomously deciding what the next steps are to achieve that objective. And AI is getting better and better, because at first you could only interact with what the LLM knew itself, which is what it was trained on from all this public knowledge.

00:24:19.182 --> 00:24:25.922
So if you ask it a question, it knows the answer within its own like trained memory, but now they can search the internet.

00:24:25.981 --> 00:24:32.442
Most recently, OpenAI talked about Operator, where now it can use a browser to go browse web pages.

00:24:32.721 --> 00:24:40.231
There's other platforms where they're building in APIs, where you can actually have the AI go like contact like a human, and then wait for a response from the human.

00:24:40.571 --> 00:24:43.971
You can have an AI call people, you can have an AI email people.

00:24:44.231 --> 00:24:48.041
At some point, the AI will be able to be in the physical realm with autonomous robots.

00:24:48.051 --> 00:24:58.821
I think that's the difference between a bot and an agent: are you getting one simple answer back, or is it thinking constantly and going off to do different things you might not necessarily have told it to do, to come back with that final result?
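
A compact sketch of the decide-act-evaluate loop walked through above: the model repeatedly picks the next action toward an objective, the program executes it, and the result feeds the next decision. The OpenAI client, the model name, and the stub web_search function are all illustrative assumptions, not the actual YesNoError or OpenAI agent stack.

```python
# Sketch of the agent loop described above: the model picks the next action,
# the program executes it, and the loop repeats until the model says finish.
import json
from openai import OpenAI

client = OpenAI()
OBJECTIVE = "Find the most interesting science news of the day."

def web_search(query):
    # Placeholder tool: a real agent would call a search API here.
    return f"(stub results for: {query})"

history = []
for _ in range(4):                                    # step limit keeps it bounded
    prompt = (
        f"Objective: {OBJECTIVE}\nHistory: {history}\n"
        'Reply with JSON only: {"action": "search" or "finish", '
        '"query": "...", "answer": "..."}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    decision = json.loads(resp.choices[0].message.content)
    if decision.get("action") == "finish":
        print("Answer:", decision.get("answer", ""))
        break
    results = web_search(decision.get("query", ""))
    history.append({"searched": decision.get("query", ""), "results": results})
```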

00:24:59.144 --> 00:25:02.515
I really liked the analogy that you said about how it's almost like magic, right?

00:25:02.555 --> 00:25:20.384
You feel like what we're creating here are, in some ways, frameworks, but we're essentially molding responses to be structured and to create sense out of what the AI is spitting out, and then giving it limbs, giving it voices, giving it the ability to act on its own and to take on these roles alongside people, which I think is really cool.

00:25:20.835 --> 00:25:27.865
Tying it to YesNoError, how would you describe YesNoError and what role will it play in advancing scientific research?

00:25:28.295 --> 00:25:31.224
YesNoError was started as an accident.

00:25:31.664 --> 00:25:37.085
I saw this rise of AI plus crypto and I found that super interesting.

00:25:37.095 --> 00:25:38.994
The idea of tokenizing agents.

00:25:39.315 --> 00:25:42.595
There's something very fascinating about tokenizing agents.

00:25:42.974 --> 00:25:51.882
But when I went and did a lot of research into what was out there, I saw a lot of AI tech that didn't look very impressive.

00:25:52.201 --> 00:26:04.682
And obviously, a lot of people speculate, and that's like a huge part of crypto: people are looking to invest in and make money as a narrative grows, and even technology that's built that's not very good

00:26:04.852 --> 00:26:10.892
can sometimes really not matter, because if enough people are speculating on it, then the price goes up and half the people are really happy.

00:26:11.311 --> 00:26:17.807
I don't think that's doing justice to what is possible with tokenizing an agent, combining AI with crypto.

00:26:18.196 --> 00:26:33.955
And so the simple beginning of YesNoError was I just thought that the space deserved an example of something that was actually a really virtuous, good use case of combining AI plus crypto and like tokenizing the AI.

00:26:34.355 --> 00:26:59.851
And the way that I discovered the idea is, I was scrolling through X and I saw this conversation between Marc Andreessen, one of the founders of Andreessen Horowitz, one of the greatest VCs out there, and Ethan Mollick, who's like a top leading AI expert. And what they were talking about was super interesting: in October of 2024, there was this peer-reviewed research paper that got a tremendous amount of press.

00:27:00.300 --> 00:27:01.770
All major media outlets covered it.

00:27:02.030 --> 00:27:06.901
It basically said that black cooking utensils were extremely toxic and that people had missed this.

00:27:06.901 --> 00:27:09.760
And it's so toxic that it's actually making you sick.

00:27:09.770 --> 00:27:13.790
You need to go throw away your black spatulas, all of your black cooking utensils.

00:27:13.790 --> 00:27:15.040
You need to go in the kitchen and throw them away.

00:27:15.250 --> 00:27:16.320
So this was on TV.

00:27:16.641 --> 00:27:21.300
All major press, if you go Google black cooking utensils toxic, you'll go find like a ton of results.

00:27:21.701 --> 00:27:33.601
Two months later, in December of 2024, so very recently, it turns out, people found out that this research paper that was peer reviewed and covered everywhere actually had a mathematical error in it.

00:27:33.820 --> 00:27:39.141
The amount of toxicity was accidentally, maybe multiplied by 10.

00:27:39.506 --> 00:27:42.105
And so actually it turns out they're not toxic at all.

00:27:42.155 --> 00:27:43.935
Like not toxic enough to matter.

00:27:44.185 --> 00:27:45.536
And you don't need to throw these away.

00:27:46.016 --> 00:27:57.865
And the discussion between Ethan Mollick and Marc Andreessen was that they had discovered that if you pass this paper over to OpenAI's o1 model and you simply say, are there any errors in this paper?

00:27:58.286 --> 00:28:03.945
In 30 seconds, basically instantly, it says, yes, one of these numbers is multiplied by 10.

00:28:04.125 --> 00:28:05.125
There's an error here.

00:28:05.526 --> 00:28:11.955
And they thought, what if you went and replicated this, if AI is good enough now to catch these errors that weren't caught?

00:28:12.145 --> 00:28:14.076
What if you went and did this with a thousand papers?

00:28:14.105 --> 00:28:16.715
And so I saw this, and I said, look, I can totally build this.

00:28:16.955 --> 00:28:18.286
And why stop at a thousand?

00:28:18.296 --> 00:28:20.806
There's over 90 million research papers ever published.

00:28:21.026 --> 00:28:29.306
Why don't I build a prototype that will just start going through these papers, using more advanced techniques than simply asking, is there an error?

00:28:29.766 --> 00:28:31.776
And let's go find out what's going on.

00:28:31.806 --> 00:28:37.455
And I thought, it's going to cost money to pay the AI to go process this.

00:28:37.576 --> 00:28:43.526
And no business would ever create this product because how are they making money from it?

00:28:44.010 --> 00:28:53.740
And if I create a token, then the token can almost act as a prediction market for whether people believe that this type of AI auditing agent should exist.

00:28:53.941 --> 00:28:55.000
And so that was the thought.

00:28:55.151 --> 00:28:56.871
So within 24 hours I built this.

00:28:56.921 --> 00:28:58.550
Next day I said, okay, I think it's ready.

00:28:58.990 --> 00:29:00.770
Posted it, I launched a token.

00:29:01.096 --> 00:29:04.236
And it just immediately went crazy.

00:29:04.316 --> 00:29:07.276
I thought that this would be something that I could write about in an article.

00:29:07.316 --> 00:29:20.056
I could go tell people, hey, something really interesting is happening with AI plus crypto. But it was, like, day one: $150 million worth of volume, over 9,000 holders right away, millions of views on X.

00:29:20.086 --> 00:29:23.895
And, for anyone out there who is an aspiring entrepreneur:

00:29:24.145 --> 00:29:30.115
You may hear this story and you may think, wow, Matt was like just jumping up for joy when this happened.

00:29:30.516 --> 00:29:39.665
Like, I was very excited, but this is a large volume of activity that is happening, that you need to process and analyze, and calm yourself.

00:29:40.096 --> 00:29:42.855
And what I did when this happened is I thought, you know what?

00:29:43.036 --> 00:29:44.375
We have lightning in a bottle.

00:29:44.816 --> 00:29:46.256
This is a very good idea.

00:29:46.415 --> 00:29:47.546
It's very virtuous.

00:29:47.645 --> 00:29:49.316
We should go find all the papers.

00:29:49.445 --> 00:29:53.941
We should go audit them and probably you don't even have to just stop at science.

00:29:53.990 --> 00:29:55.250
You could do so much more here.

00:29:55.270 --> 00:30:00.721
And so I decided very quickly, very thoughtfully, that I was going to go all in on this.

00:30:00.740 --> 00:30:07.211
And my co founder at Octane AI, Ben Parr, who I've also known for decades and has been with me since the beginning.

00:30:07.611 --> 00:30:09.351
He used to be the co editor of Mashable.

00:30:09.351 --> 00:30:11.201
He wrote a best selling book on marketing.

00:30:11.201 --> 00:30:15.131
He interviewed Mark Zuckerberg and Steve Jobs and all these incredible people.

00:30:15.530 --> 00:30:17.381
We decided, let's go all in on this.

00:30:17.586 --> 00:30:19.455
And that's what we've been doing ever since then.

00:30:19.455 --> 00:30:21.905
And right now, I think we're about a little over 30 days into it.

00:30:22.306 --> 00:30:22.875
Wow, yeah.

00:30:22.976 --> 00:30:24.165
Yeah, that is very cool.

00:30:24.185 --> 00:30:27.695
It's crazy how some things can just take off like right away.

00:30:27.746 --> 00:30:29.695
But many years in the making, right?

00:30:29.955 --> 00:30:32.486
It's a similar story with a lot of these AI projects.

00:30:32.486 --> 00:30:39.695
What we've seen is that some of these overnight successes, like everything with ai16z, everything with Virtuals, they've been working on this for a long time.

00:30:40.131 --> 00:30:46.721
And it's not until now that we have the right timing in the market where you can tokenize these agents that people are really starting to pay attention.

00:30:47.121 --> 00:30:54.820
One thing is, how do you guys deal with, say, minimizing the hallucinations and not getting false positives, where it's saying there's an error when maybe there actually isn't an error?

00:30:55.221 --> 00:31:07.911
So the first version of YesNoError simply pulled the most recent research papers from arXiv, which is an incredible source of new papers; like, DeepSeek's newest paper was published on arXiv.

00:31:08.310 --> 00:31:12.840
And it checked them for mathematical errors and a couple of other things.
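
A rough sketch of what that kind of v1 loop could look like, not YesNoError's actual code: pull recent papers from the public arXiv API and ask a reasoning model whether the math checks out. The model name and prompt are illustrative, and for brevity this only checks abstracts, whereas the system described here processes full papers.

```python
# Rough sketch of a v1-style pipeline (illustrative, not YesNoError's code):
# fetch recent arXiv entries and ask a model to flag mathematical errors.
import requests
import xml.etree.ElementTree as ET
from openai import OpenAI

client = OpenAI()
ARXIV = "http://export.arxiv.org/api/query"
ATOM = "{http://www.w3.org/2005/Atom}"

# Pull the 5 most recently submitted papers in a category via the public Atom API.
feed = requests.get(ARXIV, params={
    "search_query": "cat:cs.AI",
    "sortBy": "submittedDate",
    "sortOrder": "descending",
    "max_results": 5,
}, timeout=30).content

for entry in ET.fromstring(feed).findall(f"{ATOM}entry"):
    title = entry.find(f"{ATOM}title").text.strip()
    abstract = entry.find(f"{ATOM}summary").text.strip()
    verdict = client.chat.completions.create(
        model="o1",  # illustrative; the episode mentions OpenAI's o1
        messages=[{
            "role": "user",
            "content": f"Paper: {title}\n\n{abstract}\n\n"
                       "Are there any mathematical errors in this text? "
                       "Answer YES or NO, then explain briefly.",
        }],
    ).choices[0].message.content
    print(title, "->", verdict[:200])
```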

00:31:13.240 --> 00:31:18.961
And that system, even though that was v1, it's already processed, almost 10,000 papers.

00:31:19.240 --> 00:31:22.851
Over a hundred papers had mathematical errors in them.

00:31:23.310 --> 00:31:29.980
And I think one of the thoughts when you look at that is, okay, sure, it's found over a hundred papers with mathematical errors, but is it true?

00:31:30.040 --> 00:31:33.721
Are those errors actually errors, or is it just saying they're errors?

00:31:34.191 --> 00:31:39.431
And so what I did is on Arxiv, you can actually go pull up the author's email address.

00:31:39.431 --> 00:31:42.371
You have to just go type in a little captcha, and it gives you their email address.

00:31:42.671 --> 00:31:51.631
And I thought why don't I just go do that for the hundred papers, and then contact them and say hey, I made this AI agent, it's auditing papers, I audited yours.

00:31:51.631 --> 00:31:52.621
It said it found an error.

00:31:52.830 --> 00:31:54.861
I just want to double check if that's accurate.

00:31:55.260 --> 00:32:06.871
And of course, not everybody got back to me because that's just how email works, but 90 percent of people got back to me and almost every single one of them was very thankful and said that, yes in fact, there was an error there.

00:32:07.181 --> 00:32:07.971
And they acknowledged it.

00:32:07.980 --> 00:32:15.560
There was one situation where someone had written a paper, and they'd actually written a paper about another paper that had an error in it.

00:32:15.570 --> 00:32:19.030
So their whole paper was actually built on the premise that another paper had an error.

00:32:19.260 --> 00:32:25.711
And so the AI did have a false positive where it saw that mathematical error in their paper because it was there.

00:32:26.086 --> 00:32:29.256
But incorrectly flagged their paper as being wrong.

00:32:29.635 --> 00:32:32.786
So then I had to go tell the AI: yo, you really gotta double check.

00:32:32.826 --> 00:32:35.486
Is it talking about a math error in something else?

00:32:35.486 --> 00:32:37.736
Or is it talking about a math error in this paper?
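
The fix described here can be as simple as adding a disambiguation instruction to the auditing prompt. The wording below is a hedged illustration, not the production prompt.

```python
# Illustrative prompt tweak to avoid the false positive described above:
# make the model say whose math the error belongs to before flagging it.
AUDIT_PROMPT = """You are auditing the paper below for mathematical errors.
Only report an error if the mistake is in THIS paper's own calculations.
If the paper is describing or quoting an error in a DIFFERENT paper it cites,
do not flag it; note it as 'error in cited work' instead.

Paper text:
{paper_text}
"""
```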

00:32:37.945 --> 00:32:40.286
So that was like V1 of the system.

00:32:40.556 --> 00:32:46.695
The V2 of this system that we've been building out is: there are multiple places that you can go pull research papers from.

00:32:46.865 --> 00:32:47.945
arXiv is only one of them.

00:32:47.945 --> 00:32:50.776
There's a lot of places that you pull different types of papers from.

00:32:50.786 --> 00:32:59.236
So we've been building out a system where we can index papers from multiple different sources, and we're continuously adding new sources to it.

00:32:59.625 --> 00:33:15.147
My first version was only extracting HTML, which actually meant that it wasn't pulling out images, so if there are graphs or diagrams, it wasn't pulling those in. The new version does that, and uses AI vision to basically transcribe and analyze the images as well.

00:33:15.468 --> 00:33:22.008
And then the new version also has a much better approach to extracting mathematical formulas to make sure that we keep consistency.

00:33:22.008 --> 00:33:27.857
So in the first version, the hundred errors we found are real, but maybe there were a couple more errors that we actually didn't catch.

00:33:28.127 --> 00:33:42.978
The other part of the new system is, instead of just throwing the most advanced model, which is the most costly model, at everything, we've been experimenting with using cheaper models to do, like, a pre-analysis to flag anything that might have an error.

00:33:43.008 --> 00:33:46.617
And then you can rerun that with the more expensive models.
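
A hedged sketch of that two-pass triage idea: an inexpensive model only screens for anything suspicious, and only flagged papers are re-run through a stronger, more expensive reasoning model. Model names and prompts are illustrative placeholders.

```python
# Sketch of the cheap-screen / expensive-verify triage described above.
# Model names are illustrative; any cheap/strong pairing works.
from openai import OpenAI

client = OpenAI()

def ask(model, prompt):
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def audit(paper_text):
    # Pass 1: a cheap model only decides whether anything looks suspicious.
    screen = ask(
        "gpt-4o-mini",
        "Does this paper contain anything that MIGHT be a mathematical error? "
        "Answer SUSPECT or CLEAN.\n\n" + paper_text,
    )
    if "SUSPECT" not in screen.upper():
        return {"status": "clean", "detail": None}
    # Pass 2: only suspected papers get the expensive reasoning model.
    detail = ask(
        "o1",
        "Carefully verify the mathematics in this paper and describe any real "
        "errors, or say NONE.\n\n" + paper_text,
    )
    return {"status": "flagged", "detail": detail}
```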

00:33:46.667 --> 00:33:55.327
And we also announced a partnership with Brian Armstrong's company ResearchHub, where they have a huge group of peer reviewers.

00:33:55.337 --> 00:34:01.188
They're basically disrupting, the peer review industry where they're making it really accessible for anyone to get peer reviewed by a human.

00:34:01.238 --> 00:34:05.167
If we're making it possible to be peer reviewed by an AI, they're making it possible to be peer reviewed by a human.

00:34:05.627 --> 00:34:13.297
And what we're doing with them is anytime we find an error, we're putting a system together so that we can actually have a real PhD human go and verify that error.

00:34:13.297 --> 00:34:21.797
So I think there's a lot more work to be done here to make this even much, much better, but that's where we're at right now in the immediate future.

00:34:22.038 --> 00:34:32.193
And then, I imagine a lot of these papers are public, but what is the etiquette for expanding this to confidential information, or training on confidential data, intellectual property, copyrighted data?

00:34:32.282 --> 00:34:33.242
Is there a path for that?

00:34:33.643 --> 00:34:38.943
So we're starting with public research and I think it's important to just do a really good job with that.

00:34:38.952 --> 00:34:46.132
So if V1, we went through, almost 10,000 papers with a prototype version, V2 is going to allow us to go through millions of papers.

00:34:46.592 --> 00:34:49.873
And what we're looking to do is basically publish a protocol where we show:

00:34:50.217 --> 00:35:00.675
Hey, like this is actually working at scale and you always have to be open to the possibility that it's not working at scale, but everything that we've seen so far indicates that AI can catch a good amount of errors.

00:35:00.684 --> 00:35:02.215
Can it catch all of them right now?

00:35:02.554 --> 00:35:05.034
I don't know, but it can catch a lot.

00:35:05.065 --> 00:35:09.425
Like 1%, I think is like a pretty high percentage of papers that have errors in it.

00:35:09.445 --> 00:35:22.824
So the next version that we're doing is, let's go through 100,000, let's go through a million public research papers, and then let's publish our findings from that, and maybe we even make it so you can go replicate it, right?

00:35:22.833 --> 00:35:24.463
This needs to be very trustworthy.

00:35:24.824 --> 00:35:32.228
If this is working at that scale, like it's very important for us that we are showing you exactly what we did and how we did it and what can happen.

00:35:32.389 --> 00:35:37.768
And our goal isn't to go out and say gotcha to all these like scientists who are trying to move science forward.

00:35:38.148 --> 00:35:51.458
I think it makes a lot of sense that we come out with something where, when you're writing your research paper, or if you're like a news publication and you're going to go publish, you know, the black spatula paper, you first probably run it through YesNoError.

00:35:51.784 --> 00:36:01.333
And make sure everything is looking good and let's actually accelerate science by catching errors before they're out there and they're causing a problem.

00:36:01.554 --> 00:36:04.634
Now to your question, what do we do with private data?

00:36:04.704 --> 00:36:06.014
And how do we handle that?

00:36:06.393 --> 00:36:09.974
We don't have a solution built for it yet, but a couple of interesting things.

00:36:09.983 --> 00:36:17.704
So we've had a major health company, that I posted on X about, reach out, like a $15 billion healthcare company.

00:36:17.704 --> 00:36:25.974
Where they're interested in using this internally to audit their medical papers and their internal research and their internal documents.

00:36:26.295 --> 00:36:43.264
And so I think as we continue to expand YesNoError and we focus on this public good, there are going to be more and more examples of large corporations who will want to do audits at large scale, and they will not necessarily want to publish those publicly, but they will want to pay for that.

00:36:43.284 --> 00:36:50.985
And so there's actually a really big business that you can build here by helping people analyze private data.

00:36:51.215 --> 00:37:00.724
And then on the other side, we're basically powering this public good where we're analyzing any amount of data that's out there that we can, and then publicly sharing the results of that.

00:37:01.125 --> 00:37:06.385
And on the model side, do you think the same applies where like models might be treated differently?

00:37:06.385 --> 00:37:09.304
There are people like they're hesitant of using DeepSeek, right?

00:37:09.324 --> 00:37:12.144
Google Gemini has shown that it's been skewed for some results.

00:37:12.195 --> 00:37:15.394
How do you balance which models you choose to use in your analyses?

00:37:15.795 --> 00:37:19.304
Yeah, so different models are trained on different amounts of data.

00:37:19.335 --> 00:37:25.791
I think DeepSeek is a really good example where there are certain things that if you ask DeepSeek, it actually says it doesn't know what you're talking about.

00:37:25.791 --> 00:37:30.960
There can be censorship, there can be biases inside of models, and that's something that we have to figure out.

00:37:31.251 --> 00:37:41.150
I think math can be a good starting point, because it's more accurate; a human can verify whether the math is true or false.

00:37:41.550 --> 00:37:50.670
With us, we started with o1, but what we've been experimenting with is, there can be benefits to running the same paper through multiple models.

00:37:51.121 --> 00:37:55.451
It's almost like, if you treated each AI model as, like, a different peer reviewer.

00:37:55.780 --> 00:38:06.110
And so, the same way that, if you had a paper and you wanted it to be peer reviewed by humans, you would go reach out to different humans, who also have different agendas and different biases and different backgrounds and different education.

00:38:06.481 --> 00:38:10.050
You can almost think of AI models in a very similar sense.

00:38:10.161 --> 00:38:11.760
And so that's something that we're looking into.

00:38:11.960 --> 00:38:22.141
And so you would be running it multiple times on the same paper cross referencing the results, maybe creating like a confidence interval to create like a holistic report and analysis, right?

00:38:22.141 --> 00:38:24.661
It's not just like a binary, there are these issues or there are not.

00:38:25.061 --> 00:38:28.681
Yeah, I think you definitely have to include what models you used when you did it.

00:38:28.701 --> 00:38:30.990
And if models disagree on certain things.

00:38:31.411 --> 00:38:41.150
When we come out with a tool that maybe anybody can just use, and you can upload whatever document you want or put in a URL, maybe we allow you to choose which model you want to run it with.

00:38:41.201 --> 00:38:43.971
Maybe we let you run it with multiple models at the same time.

00:38:44.021 --> 00:38:48.001
And then there's some sort of like synthesized report based on what all the models say.

00:38:48.010 --> 00:38:51.860
I think it's very important to be as transparent as possible about all of this.
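
One hedged way to implement this "models as independent reviewers" idea: run the same question past several models, keep each model's verdict in the output for transparency, and report the share that flag an error as a simple confidence score. Model names and the voting scheme are illustrative, not the announced product.

```python
# Sketch of treating several models as independent reviewers and
# synthesizing their verdicts into a simple confidence score.
# Model names are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
REVIEWERS = ["gpt-4o", "gpt-4o-mini", "o1-mini"]

def review(paper_text):
    verdicts = {}
    for model in REVIEWERS:
        answer = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content":
                "Does this paper contain a mathematical error? "
                "Start your reply with YES or NO.\n\n" + paper_text}],
        ).choices[0].message.content
        verdicts[model] = answer.strip().upper().startswith("YES")
    flagged = sum(verdicts.values())
    return {
        "verdicts": verdicts,                       # per-model votes, for transparency
        "confidence": flagged / len(REVIEWERS),     # share of reviewers flagging an error
    }
```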

00:38:52.260 --> 00:38:52.550
Yeah.

00:38:52.951 --> 00:38:59.501
And now that the token is live and that the project is running, what role does the token play in the current ecosystem?

00:38:59.900 --> 00:39:05.780
We've retroactively been putting together a utility based plan for the token.

00:39:06.161 --> 00:39:16.740
The things that I think make the most amount of sense, and that we're looking to build, are, one, using the token to have YesNoError process a specific paper.

00:39:17.121 --> 00:39:18.550
That's something that makes a lot of sense.

00:39:18.860 --> 00:39:24.851
The other one that I'm very excited about is the idea of tokens being prediction markets.

00:39:24.920 --> 00:39:31.990
And it's just a really great way for people to put their money where their mouth is, decide where they want things to focus.

00:39:32.280 --> 00:39:39.190
And if we're going to go process 90 million, 100 million papers, the big question becomes not whether we're going to go process them or not.

00:39:39.190 --> 00:39:40.311
We're definitely going to go process them.

00:39:40.320 --> 00:39:43.795
The big question becomes what order do you process them in?

00:39:43.976 --> 00:39:55.246
And so I think a very interesting use of the YNE token is if you had a large list of potential topics that YesNoError could be prioritizing.

00:39:55.246 --> 00:40:02.465
So maybe microplastics, maybe long COVID, maybe longevity, maybe brain tumors, or a specific drug, or DeepSeek even.

00:40:02.835 --> 00:40:10.976
And then you could allow people to use the token to vote on where YesNoError should prioritize its efforts.

00:40:11.056 --> 00:40:22.306
The thing that I really love about YesNoError is no business would ever make this and I think that you could have a lot of agents that have a token where it just doesn't really make sense.

00:40:22.666 --> 00:40:27.166
But YesNoError uniquely does make a lot of sense because it is a public good.

00:40:27.346 --> 00:40:39.525
And so if a lot of people decide that they want to go research mRNA or microplastics or whatever, YNE gives them the ability to point the massive amount of AI resources in that direction.

00:40:39.576 --> 00:40:40.945
And I think that's very fascinating.

00:40:41.346 --> 00:40:45.706
Do you think anything could be done there with this idea of prediction markets?

00:40:45.706 --> 00:40:54.755
Because people always come back to speculation or gamification, tokenization, betting on certain conclusions to be true or not based off of the papers.

00:40:54.755 --> 00:41:04.585
Like, hey, I bet that the black spatulas are actually bad, and then maybe it gets backed up by the results of the analysis, or even just a specific paper.

00:41:04.795 --> 00:41:08.065
I feel like there's an error in this paper, as I peer reviewed it myself.

00:41:08.365 --> 00:41:12.235
I'm willing to bet that there's actually an issue and so the agent should also take a look at it.

00:41:12.715 --> 00:41:24.606
And then also the idea of expanding this out, like offering this as an oracle or as a feed of data for other services to be built on, which is similar to what people are doing now with election results and sports betting, right?

00:41:24.606 --> 00:41:31.626
Like creating this as a data source, or even just having some kind of API so people can access this as a data feed and then plug it into their own agents.

00:41:32.025 --> 00:41:33.246
I think that's very interesting.

00:41:33.255 --> 00:41:39.255
I think what you have to make sure of is that the prediction market doesn't impact the result of the prediction, right?

00:41:39.266 --> 00:41:42.016
So that's the thing that's really important to make sure of there.

00:41:42.376 --> 00:41:47.246
But I think you can have an agent that does something that's good for the world.

00:41:47.715 --> 00:41:53.186
And it is open to work on a number of different topics.

00:41:53.326 --> 00:41:57.536
And then I think on the other side, you can have people who are interested in speculating.

00:41:57.990 --> 00:42:12.221
And depending on what they believe to be true, they could choose what they want to speculate on the agent working on or discovering. On the one hand, now you have a public good that's being powered and being pointed in different places, and that's just good.

00:42:12.271 --> 00:42:14.030
There's no way to say that's not good, that's just good.

00:42:14.221 --> 00:42:35.166
And then, on the other side, you have people who are potentially benefiting from that speculation. And I think, in an open market, combined with a public good, and in an area where a business would never make something that's just going off to audit all of science, this creates a very interesting new dynamic between token holders and an AI that costs money to run.

00:42:35.175 --> 00:42:38.076
So I definitely could see something like that happening.

00:42:38.505 --> 00:42:48.115
And the way that we've been building this system, we haven't really talked about this much, but the way we've been building the new system, not my prototype, is that everything is built on top of our own APIs that we've been producing.

00:42:48.126 --> 00:42:57.876
So the front end for YesNoError will be built on our own APIs, and we could decide in the future to open those up, because we actually designed the system like that.

00:42:58.076 --> 00:43:03.235
And then I guess this goes into third-party developers: is this a project that people can join?

00:43:03.235 --> 00:43:09.545
Is there a GitHub repo that developers can contribute to, or can the community get involved by spreading awareness?

00:43:09.596 --> 00:43:11.315
It's like how do you guys grow this project?

00:43:11.326 --> 00:43:12.445
How do people get involved with it?

00:43:12.846 --> 00:43:15.126
So right now we don't have a public GitHub repo.

00:43:15.235 --> 00:43:20.585
The biggest reason for that is my first version was a pretty hacked together prototype.

00:43:20.646 --> 00:43:23.416
And I wouldn't want to subject anyone to that GitHub repo.

00:43:23.755 --> 00:43:26.175
And then the second part is that we're not done building v2.

00:43:26.186 --> 00:43:28.206
I don't think it's off the table, by any means.

00:43:28.286 --> 00:43:36.925
We're basically building an AI agent framework that can check things for truth and validity.

00:43:36.945 --> 00:43:48.905
And we are applying that to science research specifically, and what's most important to myself and to my co-founder, Ben Parr, is that we need to do a really good job with that first.

00:43:48.965 --> 00:43:50.295
We don't want to spread ourselves thin.

00:43:50.356 --> 00:43:52.275
There's always a lot of things that you can go do.

00:43:52.655 --> 00:43:54.396
But we're really here to audit science.

00:43:54.396 --> 00:43:55.516
We got to stay true to that.

00:43:55.516 --> 00:43:56.905
We got to build something that's really great.

00:43:56.916 --> 00:43:58.306
So that's what we're focused on.

00:43:58.706 --> 00:44:02.496
After we do that, there are a couple of interesting things that we could do.

00:44:02.516 --> 00:44:05.525
We could open source this, we could make APIs available.

00:44:05.525 --> 00:44:11.295
We could start to look into, maybe analyzing more than just science research.

00:44:11.416 --> 00:44:13.255
But we got to do a great job at this first.

00:44:13.655 --> 00:44:39.695
Yeah, so I really like what you're doing, and I really see the parallels to something like, say, Polymarket, where you arrive at this consensus of what people actually think is going to happen, and that data is so valuable that it even ends up on Bloomberg terminals now. And approaching this from a scientific view, it's interesting because with science, maybe you can't analyze enough papers to say that something is definitively true, but you can collect enough data to say something is not true, which is how most science is done, right?

00:44:39.695 --> 00:44:42.356
Like you can only disprove, you can't prove anything.

00:44:42.715 --> 00:44:45.275
So that is really cool because that's aligned here.

00:44:45.275 --> 00:44:53.592
And then if you guys keep working on this, you'll eventually have this whole data set of falsities, or things that you can then use to derive truth and predictions from.

00:44:53.592 --> 00:44:54.492
So I think that's really cool.

00:44:54.762 --> 00:45:01.603
And the other thing that's interesting is that, you have YesNoError go read a million papers, let's say 1 percent of them have errors in them.

00:45:02.103 --> 00:45:10.302
And so one part of the truth is saying, hey, the truth is that 1 percent of these have errors, and they have varying amounts of impact.

00:45:10.702 --> 00:45:15.802
I do think there's something else that's interesting about those other 99 percent of papers that we read.

00:45:15.983 --> 00:45:24.603
Because I've been contacted by a lot of people who are researchers or research labs, or even people who are personally impacted.

00:45:24.813 --> 00:45:41.307
I had someone reach out to me who said they love YesNoError. They actually have a brain tumor themselves, and they're trying to figure out if the research their doctor is talking about is accurate, or they're trying to do their own research to figure out if there's something else they could be bringing to their doctor or something else they could be exploring.

00:45:41.797 --> 00:45:51.657
And whether it's an individual or an actual like research department, if you say, there's 50,000 articles that have been written on the topic you're looking into, go read them all.

00:45:51.987 --> 00:45:56.878
That's not something that a human or a team of humans is actually physically capable of.

00:45:56.878 --> 00:46:01.518
So you have this weird situation where maybe there's a tremendous amount of data out there.

00:46:01.918 --> 00:46:06.608
And you need it, but you can't physically even access it to do the thing you're doing.

00:46:06.637 --> 00:46:15.818
So what's going to happen is you're only going to read the most cited ones, the ones that other people have agreed that are important and you don't know what their reasoning is for why those are the most cited papers.

00:46:15.818 --> 00:46:49.478
So I think there's also something that can happen on the side that's very interesting: what if we make it possible for you to talk to an AI researcher under YesNoError that has read every paper about DeepSeek, has read every paper about this brain tumor, or the drug you're specifically looking at? You can talk to it, and when it gives you answers, it can actually be pulling in citations from all these actual papers that it's looking at, and it would be pulling in citations from papers that we know don't have errors in them.

00:46:49.478 --> 00:46:58.168
So I think that looking for errors is important, but at the high level, democratizing truth, whatever that is, is also super important.

00:46:58.217 --> 00:46:58.588
Yeah.

00:46:58.588 --> 00:47:05.717
At that point, you definitely got to have the hallucinations down, like you got to be able to click on the sentence that it says, and it's got to expand, like yours does.

00:47:06.117 --> 00:47:10.487
Yeah, that's what I'm imagining, and we 100 percent can do that.

00:47:10.757 --> 00:47:12.168
We're building the system right now.

00:47:12.177 --> 00:47:13.737
We are going to come out with something like that.

00:47:14.088 --> 00:47:19.338
And basically, you'll be able to give it a topic, ask it a question, and it gives an answer.

00:47:19.568 --> 00:47:23.438
And it will show citations for every single thing it says.

00:47:23.467 --> 00:47:29.188
And you'll be able to click that and actually go see the real paper that citation is being pulled from.

00:47:29.518 --> 00:47:31.648
And I'm interested to see how people use that.

00:47:32.047 --> 00:47:41.878
And is this something that you think will get adopted by the scientific community, formally, like inside of journals and magazines or where does this exist in the scientific community?

00:47:42.077 --> 00:47:45.208
You said the researchers said, thank you for finding these errors, that's great.

00:47:45.458 --> 00:47:47.237
What about the greater scientific community?

00:47:47.237 --> 00:47:50.288
Are they still being dismissive of this or they're going to be all using this?

00:47:50.688 --> 00:47:56.088
So we're doing a bottom-up approach with individual researchers and students.

00:47:56.097 --> 00:47:57.137
So that's one approach.

00:47:57.588 --> 00:48:03.728
We're talking to some companies who want to apply this to their internal documents, like I was mentioning earlier.

00:48:04.148 --> 00:48:15.608
And then we are starting to talk to schools, like a top down approach where maybe YesNoError is something that they can be supplying to their students and their researchers before they come out with things.

00:48:15.677 --> 00:48:18.697
We haven't had any negative feedback yet on it.

00:48:18.748 --> 00:48:23.978
I think, so far, people are generally pro making research more accurate.

00:48:24.398 --> 00:48:29.248
Maybe we come across some group that, disagrees with that at some point, but we haven't run into that yet.

00:48:29.447 --> 00:48:30.967
What about the scientific magazines?

00:48:30.978 --> 00:48:34.137
Are they on board, or are they like, oh, it should be human reviewed?

00:48:34.137 --> 00:48:36.418
That's how we make sure that everything is good.

00:48:36.617 --> 00:48:37.668
We haven't talked to them yet.

00:48:37.947 --> 00:48:43.168
We've mostly been focused on everyone who's actually writing the research, but the plan is to go talk to them.

00:48:43.418 --> 00:48:50.407
I think I want to be able to have gone through more papers first before we approach them and say, look, this is the exact protocol we did.

00:48:50.407 --> 00:48:51.777
We went through 100,000 papers.

00:48:51.777 --> 00:48:52.748
This is how it's working.

00:48:53.077 --> 00:48:56.568
I want to approach them with a very put together kind of result.

00:48:56.967 --> 00:49:03.018
And this is a different thing, but like within the crypto ecosystem, do you guys have like partners, are you working with different projects?

00:49:03.018 --> 00:49:07.117
Are you part of the launchpads or the ecosystems inside of crypto?

00:49:07.177 --> 00:49:12.117
Are there DeSci projects or like with the Solana ecosystem or what does that look like?

00:49:12.518 --> 00:49:17.362
Our first investor is Boost VC, so Adam Draper and Brayton Williams.

00:49:17.362 --> 00:49:19.293
And I've known those guys for a very long time.

00:49:19.293 --> 00:49:21.952
I lived in their tower in their basement for two years.

00:49:21.983 --> 00:49:23.932
I was an advisor on their last fund.

00:49:24.342 --> 00:49:26.693
They're obviously very well known in crypto.

00:49:26.963 --> 00:49:31.112
YesNoError was listed on Binance Alpha, which was not done by us.

00:49:31.112 --> 00:49:32.322
That was just picked up organically.

00:49:32.552 --> 00:49:39.612
Binance also just put out a research report, like a 20-page research paper, on DeSci where they talked about YesNoError, which was great.

00:49:40.063 --> 00:49:43.152
And then we've done a partnership with Research Hub, Brian Armstrong's company.

00:49:43.402 --> 00:49:50.063
And we're building relationships with a lot of people who are within the crypto space, but we're not part of any like specific programs.

00:49:50.262 --> 00:49:54.322
When you did the token, was that through a launchpad or Pumpfun, or did you just do it yourself?

00:49:54.322 --> 00:49:54.512
Yeah.

00:49:54.512 --> 00:49:56.762
When I did the token, I just did the thing that I thought was easiest.

00:49:56.762 --> 00:50:03.643
So I just pushed it on Pumpfun, which, if I were to go back and do something differently, I would do differently next time.

00:50:03.682 --> 00:50:08.682
But the purpose of it in the beginning, the expectation was not that it would get to the size it is right now.

00:50:09.072 --> 00:50:13.132
I originally thought that this would be just a great example to show people what you could do.

00:50:13.512 --> 00:50:15.393
But now this is what I'm focused on.

00:50:15.793 --> 00:50:26.873
Yeah, there is an interesting trend of people launching utility tokens on Pumpfun just because it's become such a great mechanism for discovery, with unmatched virality, liquidity, and eyeballs on your project.

00:50:26.902 --> 00:50:28.353
It was really just the ease of use.

00:50:28.552 --> 00:50:36.862
I just knew I could fill out a form and push a button and the thing would be live and I did it like five minutes before I posted the tweet and the rest is history.

00:50:37.262 --> 00:50:50.672
And are there other DeSci projects or other AI projects that you see now in the space that are really interesting, maybe complementary or just in parallel, where you really like what they're doing and you think it's going to be a part of this future?

00:50:51.072 --> 00:50:52.992
I really like what Research Hub is doing.

00:50:52.992 --> 00:50:54.172
I think that makes a lot of sense.

00:50:54.172 --> 00:50:55.992
I know they've been around for a while.

00:50:56.432 --> 00:51:05.663
The thing I'm looking for is people who are coming in from non-crypto backgrounds to do things in crypto and are true believers in the idea of tokenizing things.

00:51:05.713 --> 00:51:13.253
Not DeSci, but one example is Yohei, who has created Pippins, which is like an autonomous agent platform.

00:51:13.262 --> 00:51:15.103
And Yohei was the creator of Baby AGI.

00:51:15.532 --> 00:51:17.193
I wrote about him in my original article.

00:51:17.413 --> 00:51:20.543
That's someone who's coming from like Web2 to do Web3 things.

00:51:20.922 --> 00:51:22.012
So I'm interested in that.

00:51:22.012 --> 00:51:30.583
I know a lot of projects are started in crypto to look like they're doing something and then people speculate and then they're not actually doing anything.

00:51:31.003 --> 00:51:33.943
I'm obviously not excited about anyone who's doing something like that.

00:51:34.342 --> 00:51:35.043
Yeah, for sure.

00:51:35.043 --> 00:51:49.443
I think it's something that we try to tell our audience look at the background of the project, look at what they've been doing before crypto, or even if they've been doing crypto for a while, like what they've been doing in there, because the people that are working on some of the best projects here, they've been thinking about these problems for a long time.

00:51:49.782 --> 00:51:52.782
And that doesn't mean that if you're trying to start something that you're not welcome, right?

00:51:52.782 --> 00:51:59.273
Like you're welcome, but you really do have to show that you're really doing this for the right reasons because people can see through it.

00:51:59.702 --> 00:52:05.472
And especially the other founders, that's going to determine whether or not they want to work with you, whether they want to support you, or whether they want to invest in you.

00:52:05.873 --> 00:52:16.432
Yep, yep. I would say that anyone out there who's thinking of making something, you gotta be authentic about it, and I think that you probably couldn't have launched YesNoError in any other way than it was launched.

00:52:16.472 --> 00:52:26.288
I think people could definitely tell in their gut that this was just something that completely happened organically, and that's why it's been so successful. And then you have the decision after that.

00:52:26.777 --> 00:52:38.418
Do you go all in, and can you see this through? That's what we decided that we would do, and so I think commitment and conviction, on top of authenticity, is a very powerful combo.

00:52:38.818 --> 00:52:39.407
Amazing.

00:52:39.807 --> 00:52:49.568
And Matt, do you think the scientific community and humanity are ready for what this Pandora's box that you've just opened might uncover?

00:52:49.858 --> 00:52:56.182
There have been a lot of mistakes in research papers that ended up impacting millions of people's lives.

00:52:56.443 --> 00:53:07.896
Do you think now, as you're feeding an AI model all these scientific research papers, a lot of errors can start coming out, and humanity is going to look back and be like, oh wow, this is huge?

00:53:08.297 --> 00:53:08.536
Yeah.

00:53:08.567 --> 00:53:12.077
I think the greatest thing about DeSci and blockchain in general is you can't stop it.

00:53:12.317 --> 00:53:14.106
So it doesn't matter if you're ready, things happen.

00:53:14.427 --> 00:53:15.556
So that's one thing.

00:53:15.556 --> 00:53:16.306
And I think that's great.

00:53:16.347 --> 00:53:17.666
I think the other thing is.

00:53:18.157 --> 00:53:32.856
Let's say we go a thousand years from now, you look back in time, and you ask: at the advent of AGI, whatever that means, did somebody go use AI and point it not at a business solution but instead at a public good, where they're auditing science and public information?

00:53:33.016 --> 00:53:35.347
Yeah, I think that's something that would exist.

00:53:35.746 --> 00:53:59.166
Do you think this might even lead to finding the cure for cancer, or to finding certain things that have been out there but that we just haven't been pointed toward due to misinformation or lack of data? For example, the paper you referenced on debt and growth: there was an error in the Excel data, and then there were austerity measures throughout Europe due to that, and the data wasn't really there.

00:53:59.567 --> 00:54:06.577
For us, right now, we're just focused on finding errors, and I think that it assists researchers in doing what they're focused on.

00:54:07.016 --> 00:54:10.077
So I wouldn't say that YesNoError is solving cancer right now.

00:54:10.106 --> 00:54:12.226
That's not something that I can claim that we're doing.

00:54:12.447 --> 00:54:15.992
But, are we helping people who are working on solving problems?

00:54:16.302 --> 00:54:16.751
Yes.

00:54:16.981 --> 00:54:20.702
Is finding errors in papers helpful for the scientific community?

00:54:20.882 --> 00:54:21.211
Yes.

00:54:21.211 --> 00:54:22.311
Does it move things faster?

00:54:22.351 --> 00:54:22.762
Yes.

00:54:23.021 --> 00:54:25.842
Is AI becoming smarter than most people in the world?

00:54:25.931 --> 00:54:26.322
Yes.

00:54:26.342 --> 00:54:32.322
Is AI seemingly going to continue to get smarter and smarter than most people in the world to the point where it's actually smarter than everybody?

00:54:32.612 --> 00:54:33.141
Yes.

00:54:33.436 --> 00:54:35.646
That all seems very likely.

00:54:35.896 --> 00:54:46.976
Now in order to solve some of these larger problems, it's likely that AI will need to start being able to create simulations so that it can actually run scientific experiments in like simulated environments.

00:54:47.166 --> 00:54:49.007
Do I think that's going to be possible at some point?

00:54:49.137 --> 00:54:49.487
Yeah.

00:54:49.646 --> 00:54:51.246
So when do we solve cancer?

00:54:51.297 --> 00:54:51.766
I don't know.

00:54:51.806 --> 00:54:54.847
But is AI definitely the thing that will help us do that?

00:54:54.916 --> 00:54:56.027
If we ever do that?

00:54:56.277 --> 00:54:57.306
100 million percent.

00:54:57.706 --> 00:54:58.257
Absolutely.

00:54:58.257 --> 00:55:01.827
And how far do you think we are from AI starting to run those simulations?

00:55:01.867 --> 00:55:03.456
One year away, five years away?

00:55:04.056 --> 00:55:04.396
I don't know.

00:55:04.416 --> 00:55:05.336
Maybe it's decades.

00:55:05.336 --> 00:55:06.726
Maybe we're in the simulation right now.

00:55:06.757 --> 00:55:07.746
That, I'm not sure.

00:55:08.146 --> 00:55:08.847
Makes sense.

00:55:09.217 --> 00:55:12.197
And one last question to wrap up these episodes.

00:55:12.516 --> 00:55:20.507
If you could travel back in time to the time when you started in 2016 working with AI, is there anything that you'd tell that version of yourself?

00:55:20.617 --> 00:55:22.246
Spend more time coding.

00:55:22.646 --> 00:55:25.257
There's nothing more important than learning to code.

00:55:25.786 --> 00:55:27.947
I think a lot of people can say, don't go into CS.

00:55:27.956 --> 00:55:41.186
Don't learn to code because AI has gotten so big. But there's nothing more important than being able to build, and people who learn to code and learn to build are going to be the people who create. They're going to be able to do it a million times better because of AI, and it's not going to cancel them out.

00:55:41.586 --> 00:55:42.556
Makes a lot of sense.

00:55:42.956 --> 00:55:49.586
And Matt, where can our listeners find out more information about yourself and what you're building and staying connected?

00:55:49.987 --> 00:55:53.496
So if you want to follow me, go to X.com/MattPRD.

00:55:53.927 --> 00:55:56.746
And if you want to learn more about YesNoError go to yesnoerror.com.

00:55:57.467 --> 00:55:58.536
We'd love to have you involved.

00:55:58.936 --> 00:55:59.396
Awesome.

00:55:59.407 --> 00:56:00.436
It was a pleasure.

00:56:00.637 --> 00:56:01.387
Thank you guys.

00:56:02.237 --> 00:56:07.387
Thanks for tuning in to the Chain Stories podcast, where disruptors become trailblazers.

00:56:07.666 --> 00:56:13.786
Don't forget to subscribe to hear more inspiring stories from those who are pushing boundaries in digital assets.

00:56:14.077 --> 00:56:19.246
Brought to you by Dropout Capital, where bold vision transforms into reality.

00:56:19.407 --> 00:56:23.757
Check out our social media links down below and visit us online at dropout.capital.

00:56:24.847 --> 00:56:30.476
And remember, the future belongs to those who dare to challenge the norm.