The Real Implications of Generative AI - Extended Quotes
What we heard at a Meeting of the Minds from remarkable people at the forefront of the Gen AI world
Nobody is as smart as everybody. That was the saying that inspired our first Meeting of the Minds event, held in the Ferry Building in San Francisco on the evening of May 24th, to figure out the real implications of Generative AI. We gathered an invite-only crowd of 250 people from diverse networks to help pool insights about what is really going on with the arrival of Generative AI right now.
But we chose about a dozen remarkable people from that group to give their initial thoughts on various questions in short remarks of roughly five minutes. What follows are particularly insightful ideas from some of them, edited into extended quotes from a transcript of the 90-minute program.
You can read a shorter synthesis of what we learned in the event in an accompanying essay on The Real Implications of Generative AI, or an introduction to this whole Substack series in The Great Progression Begins. But the extended passages below were just too good not to share.
Ken Goldberg, Chair of AI and Robotics at UC Berkeley, Cofounder of a robotics startup, yet a long-time skeptic of what AI could do:
I've been a skeptic about artificial intelligence for 50 years. People who know me know this. I've always said that we're probably not going to pass the Turing test anytime in my lifetime. I've been just really very skeptical about all these different waves of AI. I've lived through a bunch of them, but this one with Generative AI I do think is different. I think that something very important has happened. I think that we are now beyond passing the Turing test; that is, maybe we do need a few more tweaks, but we're there.
The other thing I always said was AI will never be creative. It's never going to really create anything new or interesting and it's never going to tell a funny joke or really invent anything. I now believe that is no longer the case. I believe it can do that. So we now have something that is a form of artificial creativity for the first time and I'm very excited about that because it means that it opens up all these possibilities.
Now, when I say creative, I want to just put that into perspective. It's not that we could just let it loose. We have to work with it, but if you give it two patents and ask it to come up with some new inventions that might combine elements from these two patents, ChatGPT is actually very good at synthesizing them.
You can give it multiple papers. And you say, "What are some common elements of these papers? What could some new ideas come out of them?" And it's good at that. So this is creativity. I mean, this is what we do in my lab and what we do with colleagues. We talk about ideas, put two ideas together, and we come up with something new. So that is very, very exciting.
…
About 400 years ago, everyone believed that the earth was the center of the universe. And it was about that time that Galileo looked up, saw the moons of Jupiter, and said, "Maybe we're not at the center of the universe." The church condemned him. There was a lot of pushback on this.
I think we're at a similar moment where there is a real reckoning of maybe there's another form of intelligence. I'm not afraid that it's going to take over and dominate us, but it's a different form of intelligence and I think it's incredibly interesting and it's an opportunity for us to expand our own minds, to create new things, and we are going to be able to figure this out.
Tim O’Reilly, founder of O’Reilly Media, which caters to software coders, curator of the FOO Camp unconferences, & the person who coined the term Web 2.0:
I want to give you a perspective on the history of computing that we don't usually talk about. Each great advance in computing has brought computers closer to human speech.
The very first computers were literally hardwired circuits. Then we programmed them in machine language. Then we started to write assembly language, which was still very close to the language of the machine. Only a small number of people could do it.
Then you have a breakthrough with the personal computer, and it wasn't just that personal computers became commonplace, it's that they also started to have much easier programming languages, such that hobbyists and individuals could start to imagine, "Hey, I can use this thing." So it was a democratization. It was coming closer to human speech.
Then we see a revolution like the graphical user interface, which again made it possible for even more people to use computers, because you didn't have to learn any arcane language, you could just start pointing and clicking.
Then suddenly you get this breakthrough with the World Wide Web, and it inverted the paradigm. Instead, you have human language as the interface, written language to be sure, and that calls up individual programs.
And guess what? This Generative AI is the next big wave. For the first time, we really are starting to have computers that have gotten smart enough that they're coming all the way towards us where we can actually speak with them in our language and they can understand it. So that's a profound shift.
I think we're just at the beginning of an astonishing new wave that really is bigger than any of the previous waves, just as each one that came along was bigger than what came before. There were hundreds of millions of PCs and everybody thought that was amazing. Then suddenly there were billions of people on the internet.
…
In a way, the story of computing is also the story of the development of a kind of hybrid. Many times when we think of AI we think of it as some kind of singular intelligence apart from us. I like to suggest that AI is kind of a massive hybrid of human and machine.
This is another way of thinking about this progress. You used to have these individual machines, they became more and more connected, but more than that, they became connected not just as machines, but as devices for harnessing and harvesting the collective intelligence of all their users.
If you think about what I called Web 2.0 back in 2004, it was really about how what came after the DotCom bust were the companies that harnessed collective intelligence. Google literally took all the knowledge that was embodied in all the documents that humans were creating and then presented it in this new way. Twitter made it real time.
Now, this GenAI is the next stage because these are large language models. They are taking all of human written text and bringing it together. So they're really an amalgam of human and machine. They're reflecting ourselves back to us.
Of course, that's an important thing you realize. They're a mirror. They're a mirror of all that's good and bad in our society, and it's really important when we think about, "Oh, we're going to fix the bias." We don't want to be fixing the mirror. We want to be fixing what it is showing us, which is us.
…
We're in this moment of an intense explosion into the next stage, which is going to be enormously larger than anything that's happened before. It is going to bring people and machines together in new ways, and we're really only at the beginning. It will be very disruptive, but I think it's also incredibly empowering and it will help us.
There's a great quote from a guy named Paul Cohen, a professor at the University of Pittsburgh, who once said, "The opportunity of AI is to help humans model and manage complex interacting systems." If you look at the challenges of the world today — climate change, economics — these are challenges of coordination.
I like to end with a quote from Hal Varian, Google's chief economist who once said about robots, "If we're lucky, the robots will arrive just in time."
Michelle Lee, former head of the Amazon Web Services Machine Learning Solutions Lab, former Director of the US Patent and Trademark Office, & former Under Secretary of Commerce:
Even before the announcement of ChatGPT back in November of last year, I was a believer that artificial intelligence is the most transformative technology of our generation. I was already seeing that. I was head of Amazon Web Services Machine Learning Solutions Lab, helping companies across all industries have meaningful impact.
That was before Generative AI. We were using machine learning, which is a subset of artificial intelligence that uses math and statistical processes to create models, to pore over vast amounts of data, in order to identify trends and to make predictions.
So what were we doing? We were predicting medical ailments, such as congestive heart failure 15 months before physical manifestation. We were accelerating drug discovery. We were predicting the occurrences of solar superstorms. This was based upon data that was not publicly available. It was at NASA, it was within companies.
So fast forward to November of last year. OpenAI launches ChatGPT, its Generative AI large language model, trained on 45 terabytes of data combed from a variety of sources, including Wikipedia, books, and webpages: basically the entirety of the internet.
So now we have something very fundamentally different. These models can now do something that is quintessentially human. It's not just making narrow predictions based upon data. It's a good generalist on a vast array of topics.
When you use the generic large language models and you begin to refine and train them with extremely valuable data within organizations, I predict you are going to see incredibly powerful and impactful tools used by businesses to transform the way business is done.
Right now, we have a generalist, but you combine that with proprietary knowledge and information within a company and I think that's where you'll see the unlocking of the power.
So this generative AI technology will absolutely increase all of our productivity. It will be an accelerant and an augmenter of all of our capabilities.
…
I'm also a lawyer and a former government official, so I see clearly the importance of having the appropriate guardrails in place to ensure that these solutions are as fair, as responsible, and as free from bias as possible.
I will say as the former under-secretary and head of the US Patent and Trademark Office, I cannot help but think: what about all those intellectual property rights? What rights did OpenAI get when it scraped the web, created these large language models that provide all these wonderful insights? What about all those creators of content, copyrighted content?
There are going to be inventions that are made by a computer, and the patent office is saying there is no patent protection unless the invention is human-generated. We’ve got a lot of issues ahead of us.
I'm an optimist. We will navigate it. We have done it through many transformational technologies, but that's why convenings such as this are so important because we need all of your contributions to get it right.
Arjun Prakash, CEO & Cofounder of Distyl AI, a new Gen AI company & a longtime AI expert with more than a decade of experience at Palantir:
So there's a lot of different capabilities that this Generative AI technology has. First is its ability to interact with you in your language. We have progressively moved from talking to the computer in the computer's language to now talking to the computer in our language. And I think that's a pretty big capability shift.
A second really, really big deal is Generative AI’s ability to follow instructions. So now I can talk to the model and I can give it instructions. That's what fundamentally allows it to actually do things in a very helpful way for you. This is what makes ChatGPT so helpful for you as an assistant.
I think over the next two or three decades, the ability to think with clarity and give clear instructions, much the same way you would give instructions to a new employee at your company, is going to become an increasingly relevant skill.
…
I think Generative AI has three implications for enterprises. Number one is your ability to create insights. All of a sudden we have increased the number of people at an enterprise who can communicate with an AI model. Many people can ask questions and get answers. Historically, you had to always go through the data scientists or the machine learning engineers. That is still a very important role, but now we have democratized this in many ways.
Number two is faster time to value. Historically, if I wanted to deploy an AI application, I had to build a model from scratch. It's pretty expensive. Now I can start with something like GPT-4, which out of the box gives me the ability to get a working prototype in days, not months. That's an incredibly fast time to value.
Number three, total cost of ownership. Historically, I needed 200 different applications and models for 200 different tasks. Now I can maintain just one model, a GPT-4 model for example, with 200 different prompt templates for those 200 tasks. It is significantly easier and cheaper to maintain 200 prompt templates than 200 models. I think this is going to be a game changer for enterprises.
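To make the one-model, many-templates idea concrete, here is a minimal sketch in Python. It is purely illustrative, not Distyl AI's actual system: the template registry, the build_prompt helper, and the commented-out API call are all hypothetical names invented for this example.

```python
# Illustrative sketch (hypothetical, not any vendor's real code):
# one shared model serves every task; only the prompt template
# changes per task, which is what keeps maintenance cheap.

TEMPLATES = {
    "summarize": "Summarize the following text in two sentences:\n{text}",
    "classify": "Label the sentiment of this review as positive or negative:\n{text}",
    "extract": "List any dates mentioned in this text, one per line:\n{text}",
}

def build_prompt(task: str, text: str) -> str:
    """Select the template registered for a task and fill in the input."""
    try:
        template = TEMPLATES[task]
    except KeyError:
        raise ValueError(f"No template registered for task '{task}'")
    return template.format(text=text)

# A single model endpoint (call shown only as a comment) would then
# receive every prompt, regardless of task:
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": build_prompt(task, text)}],
# )
```

Adding a 201st task means adding one dictionary entry, not training, deploying, and monitoring a 201st model.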
Kanjun Qiu, CEO & Cofounder of Generally Intelligent, an AI research lab building general purpose agents that can be safely deployed in the real world:
If you think about what we can reasonably expect to come in the next year or two or three, I think having a highly capable Executive Assistant is going to start to be possible in the next year or maybe two. This is not very far off.
This is what we'd call quintessentially human capability, this ability to see what's going on, what the constraints are, what the goals are, and change them and push on them and try to actually get things done in the world.
…
Then in terms of the very interesting stuff that's coming later in the 2020s, we work on agents that can code, that can write software independently, and you instruct them in natural language and they can write a new interface for you and they can modify your software code base and come back to you and tell you something's impossible or tell you how to change the way that you're implementing something.
When you have software that can write software, like you have computers that can code themselves, all of a sudden computers start to look very different. Right now we need to know how to code in order to tell a computer what to do, but there is a future in which we can just talk to a computer and tell it what to do and it will change itself based on what we tell it.
Sunil Paul, a serial entrepreneur who has been CEO & Cofounder of several companies, including Brightmail, Sidecar, & Spring Free EV:
Before I say what I'm about to say, I just want to make it clear, I actually am an optimist and am excited about what's going to happen with AI. However, I've been involved in a lot of breakthroughs and a lot of changes in technology.
As I have thought about what's going on with AI and thought about the evolution of technology, we're actually in danger of making some errors in our thinking.
One is the acceleration error. We've all heard about the exponentials and the ever faster and faster pace of change. Well, that's not actually true for everything. There are lots of counterexamples: the speed of commercial aircraft, space exploration, nuclear energy, gasoline efficiency, improvements in the lethality of things like chemical, biological, and nuclear weapons. All of these have been slowed, because these technologies do not live on their own. They're all embedded inside our systems, our cultural systems and our institutions.
There's a second, the breakthrough error — the idea that one breakthrough will immediately cause another breakthrough to happen. In the 1990s, IBM was successful in spelling out IBM with xenon atoms. And so it was like, "Wow. We are in an age where we can control atoms. This must mean that we will have a lot more breakthroughs."
Every breakthrough does not necessarily mean that there will be another breakthrough behind it. Landing on the moon did not result in lunar cities.
Then there's the science fiction error. Culturally, we are attuned to particular ideas. When you ask ChatGPT, "Well, do you want to take over the world?" the reason it says, "Yes I do," is that it's reading the internet, and the internet is filled with all these stories about AI taking over the world.
Then finally, there is the human analogy error. In other words, because these things seem like us, they chat like us, they interact with us, we can make the mistake that they are like us, that they evolve, that they have agency. They don't have agency, they do not evolve. They are agents of us. They are part of something that we are creating.
Jane Metcalfe, Cofounder of WIRED magazine, CEO & Founder of Proto.Life, a media company covering the Biological Age opening up:
Being here and seeing so many of you who I've known for 30 years is triggering a lot of thoughts about what we hoped for and what we feared back in the late eighties and early nineties. For those of you who are half my age, there's something called history, and it's important. I know they're not teaching it in school anymore, but understanding that there was an industry that came before what you're doing right now is really important.
And it actually dates back decades. Mathematicians have been working on AI, physicists have been working on this, and computer scientists have been working on this. This is a conversation that's been happening for a long time, and it didn't just start in November of last year.
I would say that the dawn of the internet was the most exciting thing that had ever happened to me, but I was only 28 and we had nothing but hope and nothing but excitement and nothing but positive ideas about how this was going to change the world. We basically told ourselves that whatever's good for the internet is good for humanity. It's going to bring the whole world to your fingertips, and empower people who have no access and who have no voice. And many of the things that we dreamt about came true.
Those who worked in the mainstream media shut down our dreaming out here on the West Coast back then. We thought they were just stupid, ignorant New York Manhattanites. And that was our favorite thing to do — slam all those people who just didn't get it.
What the media has learned through the decades is that conflict sells more, gets more excitement. And so we focus on the downsides, but there's a good reason to focus on the downside, because that's what ethicists do. What could possibly go wrong is a really important question to ask. Move fast and break things might work in a computer engineering scenario, but it really does not work when it comes down to human cells and diseases and so forth.
…
But I do want to get back to the positive vision. No one in this room needs to be told what can go wrong; the media is there to tell you what can go wrong. What can go right is so extraordinary and so exciting, but the narrative right now is dominated by how much money we are going to make. I don't tell that story anymore; I'm much less interested in it.
What I care about is what can we discover? I was hanging out with Craig Venter last night and talking about sequencing the genome. That's a piece of cake compared to understanding the human immune system, which is orders of magnitude more complex than just a series of genetic codes. There has never been a tool that could conceivably wrap all of those things touching the immune system together into one kind of working model until now.
And the extraordinary advances and the speed of AI advance has given us the confidence to believe that we can actually start working on this. So whether it's drug discovery, whether it's figuring out the immune system, this is now within our reach. There are many, many reasons to be very excited about AI.
I want to give thanks to those whose support makes this ambitious project possible, including this essay: our partners Shack15 club in the Ferry Building in San Francisco and Cerebral Valley, the community of founders and builders in Gen AI, and our sponsors Autodesk, Capgemini, and Slalom, who help with the resources and the networks that bring it all together. We could not do this without you.