Q&A with OpenAI's Sam Altman at Station F

This afternoon I attended a Q&A session with Sam Altman, organized by Station F, the big startup incubator in Paris. I think it was part of the tour he's currently doing, visiting several countries and talking to governments and companies, mainly about regulation around ChatGPT and LLMs.

It went nicely, with an intro by the French ministry for digital affairs, a series of questions by the session host, and a big series of questions from the public.

Here are my takeaway points, with a summary and comments below.

TL;DR:

  • he's good, and he handled it gracefully: his answers were on the whole quite clear, and he didn't fall for trap questions
  • on the flip side, he was usually very neutral and politically correct, always trying to be comforting and optimistic
  • there were a lot of interesting questions
  • no deep insights and big announcements (which was to be expected)

Questions from the host

Q: you're on a tour of several countries, how does France measure up when it comes to AI?

A: very well, thinking in advance, trying to strike the right balance; in terms of engineering a lot of nice stuff here

Q: what are some concrete applications that surprised you?

A: AI can do a lot, not just one thing; I'm excited about education, apps using our API
productivity gains, first with coders, if we could do this for scientists it would be great

Q: what are challenges to adoption?

A: it was adopted pretty fast! but we have to avoid negative impacts, the conversation is too much focused on the negative

Q: what does the world look like if the tech goes very wrong?

A: (not very clear, but he's pretty optimistic)

Q: regulation, how is it going?

A: it's important, the conversation is productive
the rate of change of the tech matters for this question, since it's going very fast

Q: what's next for OpenAI?

A: better models, faster and cheaper, it's what users want
they want to figure out how to make it safe

Questions from the audience

Q: OpenAI is accused of regulatory capture; at the same time the tech can go bad pretty fast

A: we deserve the scrutiny in our position
we want to keep on talking about safety

Q: FAIR (Meta research lab) just released their competitor of Whisper; when it comes to competition, what part of it is driven by ego vs general good?

A: part of it is ego for sure; as long as we're not competing around safety it's fine

Q: how long until ChatGPT can read written text (? not sure I understood)

A: not long

Q: are you discovering new safety challenges that you were not anticipating, with jail break?

A: we do find new jailbreaks, it's one of the reasons we released the model publicly, to have wider scrutiny and be surprised

Q: ?? (something like: with ChatGPT, are you not afraid that work is over, that people will not want to work anymore?)

A: if you give humans better tools, they do more things, not less
I don't think people won't work anymore, maybe some will choose not to work
we execute at higher levels and expectations raise with better tools

Q: biases in the education system; what are you doing to cope with them when training models?

A: I was very concerned for a long time, now not so much
good surprise: in recent papers the models seem less biased than humans
I think we'll find them less and less biased over time

Q: what about the coexistence of different models in the ecosystem, sovereign models...

A: it's a fundamental enabling technology of humanity
it's good that there are more than just ours
they allow people to do different things, competition is good

Q: (from a high-school student) how is ChatGPT going to change college and the relationship with teachers?

A: the rate of change is going to be much higher
it's like math teachers and calculators: use calculators and hold students to a higher level of expectations

Q: how do you use such tools in your own life? especially around Product Management work?

A: I don't do the PM work anymore sadly
me personally: translation recently; also when I get stuck writing something, it helps draft the first paragraph to get into the workflow
(I found his answer a bit disappointing)

Q: main goal of OpenAI: prevent misalignment of AI, can you give an example where you sacrificed revenue for this?

A: our mission is broader than that
a big part is getting a benefit to society
for instance we don't do models for adult content, even though it could maximize profits; there are many other examples, that's one
(same, I found his answer a bit disappointing and wtf)

Q: what are you reading these days?

A: no time to read the last six months, can you recommend something?
(the guy recommended sci-fi books, but the positive ones, not the dystopian ones)

Q: do you think it's really possible to regulate AI on a global scale?

A: I think it's impossible to stop the proliferation of smaller and weaker models, and it's fine
I believe the most powerful models are capable of more harm, maybe even an existential threat
we're probably going to see some instability in the world due to this, but if we work very hard together it should be ok
(quite neutral answer here)

Q: (from an arts student) how do you deal with copyright issues?

A: we want these models to be reasoning engines on databases, but right now they can also regurgitate content
if it can point to copyrighted content, there are many ways to compensate copyright holders
we don't want to store content, we want to build a system that points to content

Q: what's some regulation that might hinder the development of ChatGPT?

A: licensing framework and safety makes total sense, I'm ok with it
the ones that would impose 100% safety on something could pose a problem
(not very clear on this)

Q: ?? (maybe what are other areas of progress that you watch? couldn't hear clearly)

A: we should accelerate progress on fusion and other energy
I think we're close to major tech breakthroughs in this space
there's no need to drag AI everywhere

Q: about open source models?

A: our mission is maximize benefits of AGI for the world
we cheer for open source models, we have some of our own

Q: advice for creating a productive team?

A: we created something great because we're small
median talent in the team is incredible
you have to increase talent density (something I've heard when I was working at Alan, it basically means always hiring better people than you already have, and letting go people performing less)
most research labs hire people that are reasonably good, we hire exceptional people
our mission was misunderstood for years, now everyone is talking about AGI
we have a culture of sweating out every detail
you need to think about the entire stack: tech, interface, users, regulation...
in the end you just have to do it; failure and ridiculous ideas are tolerated in Silicon Valley, and if that weren't the case people wouldn't shoot for ambitious ideas

Q: (question from Louis Dreyfus, director of newspaper Le Monde) publishers see this tech as a threat: I pay journalists to produce good content that I sell, while you can use your model to produce infinite content for free; what's going to be my business model in the future?

A: I don't think great newspapers will be replicated by AI anytime soon
your journalists should use it to work better
there's something deep about human taste, deciding what to write about
but yeah, you will have to adapt
(re-reading this, I have to say that his tone made his answer sound less lofty than here in writing)

Q: why not make ChatGPT free for students?

A: when we have more compute, I'll be happy to offer student plans
right now we have trouble serving our customers correctly

Q: you're doing a world tour, do you have plans to open other offices?

A: we will open other offices, but not many and slowly
we still believe in in-person work
hybrid is probably not the best for this kind of research, we're a different company
we like Europe, we want to make sure the regulation works for us

Q: (from a guy at Snap) ?? (not understood)

A: it's nice to see a company as big as Snap move that fast
I hope we can do the same at OpenAI

Q: personal computing freedom started 50 years ago; in the AI space we have two directions, big or personalized models, which are you choosing?

A: both have an important role, we're interested in both
same for open source

Q: emerging markets: how can we leapfrog and skip costly mistakes?

A: this is a tech that's going to help the developing world more
the cost is going to equalize, everyone will have access to sophisticated cognitive services
developing economies are embracing this tech very fast

Q: AI and web3 in the long perspective

A: the way that I think: can you build something that's going to be helpful, do you build something great that people will use?
I don't have a very deep answer

Q: three main dangers of AI according to you?

A: it's human to have fear about that
honest answer: I cannot articulate my three answers
societies will use it in different ways
fears might not be warranted
example about medical access, an obvious area for risks, however there could be many more benefits than problems
other example about Replika (virtual friends / girlfriends): it sounds low risk, but maybe it will make us mad at each other like social networks did (? not sure here, couldn't hear well)
so I can point to the obvious risks, but it's not very useful
we cannot predict the major risks and be confident that we're right

Q: what about disinformation?

A: after training GPT-4 we spent 8 months doing safety testing and audits, while we were under pressure by users to release
future models may take longer
we ask for patience and support
let societies decide what they're comfortable with

By @Clément Chastagnol