she oversaw product teams building ChatGPT, DALL·E, Sora,
and contributed to advancements in AI safety, ethics,
And I could tell you, I did a story on Microsoft recently
and unprompted,
any number of people told me how important she was
Prior to joining OpenAI, she managed the product
you left OpenAI with a very generous and diplomatic note.
I know from our prep you're not gonna be talking a lot about
Can you tell us anything about what you're up to next?
I'm not going to share much about what I'm doing next
And yeah, generally I would totally ignore the noise
and obsession about who is leaving the labs and so on
and let's focus on the actual substance of things.
But what I'm excited about is, you know, quite similar
And I really think that we're sort of at this beginning
how our civilization co-evolves with the development
of science and technology as our knowledge deepens.
It was a company where people had a shared vision,
humanity really was on this quest to take, you know,
and before that we had college level performance.
And before that, just a couple of years
before that we had high school level performance.
that has a capability to learn how to perform at human level
even if it's not something that happens within a couple
and we believed in this, what you call spiritual mission
Whereas now we've made enough progress that we can kind
for how AI would really advance transportation.
and particularly how it would change our relationship
in exploring virtual reality, augmented reality,
and it was his essay on the Singularity where he talks about,
you know, he sort of likens our era
to a time where the change is so transformational
it would be the most important thing that I would do.
You know, you mentioned, you know, the VR company you work
or if this conference were taking place like six years ago,
all people would be talking about was the metaverse, right,
and I don't think any
of the sessions here are about the metaverse, you know,
that I thought it would happen in that particular time.
I was more curious
to understand this next human-machine interface
and augmented reality have definitely advanced a lot
since then.
And yeah, I think we will definitely see great technologies,
and they're somehow saying that at this moment, you know,
which was, you know, kind of like an astounding leap.
So I think one interesting observation is that people get,
to come in our society's ability to adapt to more change.
But in terms of whether there is a plateau
or not, let's consider where the progress came from.
And a lot of the progress today has come from, you know,
increasing the size of the neural networks,
increasing the amount of data, increasing the amount
that as you increase all of these things predictably leads
of different data, code and images and video and so on.
So we've seen a lot of advancement coming from that.
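The "predictable" gains from scaling that the speaker describes are usually summarized as an empirical power law relating loss to compute. As a rough illustration only (the numbers below are made up, not real benchmark data), fitting such a law can be sketched like this:

```python
# Hedged sketch: the scaling trend described above is often modeled as
# loss ≈ a * compute^(-b). The data points here are synthetic, generated
# from that form so the fit recovers the assumed constants exactly.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # hypothetical training FLOPs
loss = 5.0 * compute ** -0.05                       # synthetic power-law "losses"

# Fit a line in log-log space: log(loss) = log(a) - b * log(compute)
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
b = -slope
a = np.exp(log_a)

print(round(b, 3))  # recovered exponent, ≈ 0.05
print(round(a, 2))  # recovered prefactor, ≈ 5.0
```

The point of the log-log fit is the "predictably" in the quote: on synthetic or well-behaved data, a straight line in log space is enough to extrapolate how loss falls as size, data, and compute grow together.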
we're just starting to see the rise of more agentic systems.
So I expect there is going to be a lot of progress there.
but I, I'm quite optimistic that the progress will continue.
people are exploring things like synthetic data
this year companies are spending a billion dollars
and next year that goes up by a factor of 10 to 10 billion
to AGI-level systems is not just about capability.
It's about figuring out the entire social infrastructure
in which these systems are going to operate.
because this technology is not intrinsically good or bad,
they got more excited about the building AGI part
I think kind of like the market dynamics have pushed
everyone in the industry to really innovate in that vector.
I would say that civilization needs to coexist harmoniously
Some people have talked about these as existential threats
because people aren't really building the safety stuff now,
market alignment on the short-term safety questions
And so I think a lot of effort will go, and already is going,
into understanding what these systems are capable of,
why we haven't been able to get rid of hallucinations?
something like, these things are always tied together,
and you almost cannot distinguish them;
it's impossible to distinguish the lion from the lamb.
And I think hallucinations are like that where it gives you,
where you need very accurate information in, you know,
But it's still something that we need to figure out.
Some people have suggested, you know, you talked earlier
But it seems to me that the more we go down this path,
the more valuable the trustworthy information is,
that talked about that if models are trained on, you know,
which seems to put a premium on, like, human-created
You know, it winds up being some sort of licensing thing
for the best, most trustworthy models,
which then sort of, I guess, limits its world models.
How are we going to eventually deal with this IP issue
There is the aspect of, you know, how the laws evolve
and figuring out and innovating perhaps in business models
and understanding, doing more research
and understanding how specific data contribution
And another layer is definitely the research on the data
like our reinforcement learning with human feedback
or you're doing reinforcement learning from AI feedback,
and requires a lot of human feedback or synthetic data.
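At the reward-model stage, the human (or AI) feedback mentioned here is typically turned into pairwise preferences and trained with a Bradley-Terry style loss. As a toy sketch with invented scores (not the interviewee's actual implementation), the objective looks like this:

```python
# Hedged sketch of the pairwise-preference loss commonly used in RLHF.
# Scores are toy numbers standing in for a reward model's outputs.
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): small when the preferred answer scores higher."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss pushes the reward model to rank the human-preferred answer above
# the rejected one: a correct ranking incurs far less loss than a reversed one.
print(preference_loss(2.0, 0.0) < preference_loss(0.0, 2.0))  # True
```

Whether the comparisons come from human raters or from an AI judge ("RLAIF"), the optimization target is the same; only the source of the preference labels changes.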
which can match and exceed some human capabilities
and how civilization co-evolves with this technology.
I think it's entirely up to us, the institutions,
the structures we put into place, the level of investment,
the work that we do,
and really how we move forward the entire ecosystem.
and constrain the actions of any specific individuals.
or individual to bring AGI to the entire civilization.