This past month, OpenAI’s co-founder and CEO Sam Altman, along with some of his team, has been traveling the world, visiting 16 cities to meet with scientists, politicians and everyday people. They’ve termed it the ‘OpenAI World Tour’: an effort to escape the Silicon Valley bubble and have conversations with the world about how ChatGPT is being received, and what it could become.
Singapore was one of OpenAI’s last stops. During the fireside chat, which took place at the SMU Law School on a Tuesday afternoon, Altman mentioned that he’d met with PM Lee and DPM Wong that morning. The crowd he spoke to now was about 1,000 strong. Arriving a bit late, I was seated in the overflow room outside, but even over a livestream I was struck by how sincere and patient he seemed.
The Chief Executive of IMDA, as the moderator, asked him a few questions to start – what he’d seen and spoken about during his previous stops, OpenAI’s mission, and issues surrounding diversity and ethics. Shortly after this he opened questions to the floor, which was then largely dominated by AI start-up founders. I’m summarising some of my main takeaways from the hour-long talk below.
On OpenAI and Altman
OpenAI has always been interested in artificial general intelligence (AGI), but although ChatGPT looks very ‘general’ and ‘intelligent’, they don’t consider it to be an AGI. Altman has previously described AGI as something that can identify and solve its own problems; ChatGPT is just a tool that is good at processing and producing text and language.
OpenAI wants to incorporate the “collective preferences of humanity” into near-future versions of ChatGPT. They’re interested in putting all languages, cultural practices, values and beliefs into ChatGPT, to ensure that everyone’s backgrounds are reflected. OpenAI doesn’t claim to filter out the ‘bad’ and leave only the ‘good’, so it’ll be interesting to see how they keep toxicity out of ChatGPT in everyday use (something other chatbots have failed to do before).
Altman believes we’re in the “slow takeoff” timeline for AGI, and that we’ll reach true AGI in 50-60 years. He regards this as a good thing, as opposed to a world in which AGI arrives faster than people can react to it. (I imagine most people do too.)
Altman maintains that AI should be used as a tool, and never as a decision-maker. But tools can make things dramatically more efficient, which can change the nature of what we do.
Other interesting ideas
The usefulness of a technology is entirely mediated by its interface. ChatGPT has received such widespread attention not because of its raw power (large language models have been around for a while), but because of how accessible and intuitive the interface is. The fact that it’s text-based and reads like a messaging app means that users barely need to learn anything to use it. Interfaces don’t just affect how users practically interact with a product; they also shape the user’s imagination of what the product could be used for – just compare your perception of plain notepaper versus lined notepaper.
This is linked to the previous point: when things that used to be hard become much easier through technology, at some point we reach a qualitative change from the past. Computer programmers once had to use punch cards, which forced programs to be a lot smaller and simpler. As we’ve moved to writing code digitally, and as computing power has increased, the programs that exist have become far more complex. AI like ChatGPT might soon make it possible for anyone to create an app just by speaking it into existence, and at that point the complexity of the products we can make increases even more.
No amount of fine-tuning in a lab for safety can stop humans from breaking a technology or using it in devious ways. Humans are infinitely devious; they WILL break something if they want to. And sometimes the exploit you feared most doesn’t end up being a problem at all. But that doesn’t mean you stop caring about safety and bug-fixing!
It’s arrogant to think that you or your company alone can produce the perfect product without any outside input. It may be more productive, and also safer, to release your not-yet-perfect products to the world and invite conversation and responses from other experts.
In response to an audience member’s question, Altman mentioned that “music of the future is gonna be way better.” Do better tools make better art? I think tools will and do raise the floor. Stable Diffusion is making it so that an artist can no longer stand out just by drawing anime girls in a stereotypical style; it’s pushing artists to be more intentional, more creative. But we still appreciate and exalt art and music from hundreds of years ago. I don’t think having better tools will touch the best of art; it just means that the future best of art will be produced with new tools.
Human intelligence is not necessarily more special than artificial intelligence, but humans do come pre-programmed with a deep interest in other humans – their thoughts, their feelings, the things they produce. When IBM’s Deep Blue beat Garry Kasparov at chess back in 1997, everyone thought chess was done for: solved, boring. Fast-forward to now, and chess has never been more popular, people have never been better at chess, but no one’s watching two AIs battle each other. Similarly, I think AI can create aesthetically good things, but I don’t think humans are going to want to admire AI art the way they admire human art.
Just as there is an international marketplace for commercial products and services, there’s an international ‘marketplace’ for regulation and policy. There’s no explicit buying or selling, but there’s a lot of watching and adopting. Italy chose to ban ChatGPT over privacy concerns – how will that work out? During COVID, Sweden chose not to enforce lockdowns for a long time – how did that work out? Singapore chose to be the first and only country in the world to allow lab-grown meat to be sold – how will that work out? One might think that countries develop their policies largely independently, based on their own populations and political agendas, but this isn’t always the case. Countries run their little experiments, and other countries watch to see what happens before adjusting their own policies accordingly. Much of international AI policy is likely to develop this way: little experiments, and lots of watching.
Altman said that growth is “clearly what the world wants”, because the market keeps consuming and rewarding growth. I think there’s a lot to unpack here. I wish this talk had more discussion about ethics in AI, but OpenAI is ultimately an AI company and not an AI ethics company. I’m looking forward to conversations on AI with more philosophers and social scientists.
