The Many Positive Possibilities of Generative AI - Extended Quotes
The Positive Reframe of the Narrative Around Gen AI in the Words of a Dozen Remarkable People on the Forefront of this New Technology
We’ve heard plenty from the handful of so-called tech experts who are claiming that Generative AI is leading us down a path toward human extinction. Or from economists who are warning of mass layoffs as intelligent machines take away all our jobs. Or from politicians or pundits or activists who pile up the list of horrors that this nascent technology undoubtedly will unleash on us at some vague time in the future.
Set that all aside, take a deep breath, and read through this alternative narrative about all the many, many positive possibilities that Generative AI could bring instead. We gathered a lineup of more than a dozen remarkable innovators on the forefront of developing or applying this powerful new tool and asked them to talk about the potential positive impact on our economy and society in the next 25 years.
Each person got 5 minutes to give their insights to a gathering of 200 or so other innovators in the Shack15 club in the iconic Ferry Building in San Francisco in June, just six months after Generative AI burst onto the scene. What follows are some of the best passages in those short talks.
Take the time to read them through and think deeply about what they each are saying at this critical juncture in time. People need to hear both sides of the story of the arrival of Generative AI and the beginning of the AI Age. We know the negative side. Now hear out the positive one:
Greg Corrado, Cofounder of the Google Brain team, now leads Google’s Health Research & Innovations Division:
I feel like there hasn't been a moment that I've been alive where there was a larger space of possibilities that had just come up over the horizon than have come up in the last year or so.
And I do think that there's reason for optimism. There's also reason for concern, but the optimism for me is that I believe that we're on the precipice of an explosive expansion in human capabilities that these technologies are going to bring to individuals in a very democratized way — capabilities and powers that seem unimaginable.
…
There are two phases of this revolution, and I think that we're at the beginning of the second phase. The first phase of this revolution was fundamentally about teaching machines to recognize patterns. The term of art is supervised learning where you give examples of a pattern with a label and the machines can then imitate that pattern, they can recover it…
And that's given us computers that can recognize speech, they can recognize images, they can drive cars. That recognition capability is actually a huge fraction of human intelligence.
But the generative AI revolution, the thing that's happening right now, which I've had the privilege of having a front row seat to, it's not about pattern recognition, it's about pattern completion…
I think it's good for us to understand, to be humble, that so much of what we think of as human intelligence and human capabilities is pattern recognition and pattern completion.
…
I have a five-year-old and I told him a few days ago, "There are going to be a lot more robots around when you grow up." And I really expect that in some sense, the number of interactions that we have with artificial agents that feel intelligent in a useful way by, let's say, 2035, I think it's going to outnumber the number of human minds that we have. But they're here, I believe, fundamentally to augment our abilities and expand our reach.
…
I've been working in this space of applying artificial intelligence in healthcare and in biology. And the reason that I do that is because I think that's the greatest opportunity that these technologies have to provide tangible human benefit in the near term.
I don't think that we can provide humane healthcare to the global population or that we can keep up with the rate of change in our environment, the rate of evolution in pathogens without AI.
We need AI if we're going to realize personalized medicine, if we're going to have the capability to provide care for people… Healthcare workers are suffering and they're inundated. Even before Covid, they were inundated. And we need to use AI as care multipliers, and we need to find ways of understanding our biology in a whole new way.
…
I think that the generative AI revolution is going to add a whole new layer in terms of what's possible in terms of drug discovery, in terms of protein design, in terms of understanding how we interact with our environment, how we mitigate climate change, how we adapt to climate change.
We have to try. It would be insane to look at this kind of technology, this kind of opportunity and say, "Well, I'm too scared." So please try.
Linda Avey, Cofounder of 23andMe, Founding Partner of Humain Ventures, an AI Fund focused on Health:
What can we learn from recent history? I always think back to the Asilomar conference that took place when we were first talking about genetic engineering. Paul Berg and Maxine Singer put together a meeting of about 140 people, including journalists, lawyers, and ethicists, and they essentially put up their own guardrails.
And I think that is such a great example of what we can do in the industry to build trust with the community and with the regulators because the regulators don't know this industry. We know the industry. We need the experts in the field to be weighing in on how AI can be such a positive thing.
Peter Schwartz, Chief Futures Officer for Salesforce, Right Brain of Marc Benioff, Master in Strategic Foresight:
How many of you have an executive assistant? I have two. And what do they do? They handle complexity for me. They take complexity out of my life. The world has become very, very complex, with many, many tasks to manage simultaneously. A very good executive assistant solves all of that so you can focus on the things that really matter, and that's what you will all have very shortly. And by very shortly, I mean weeks. I don't mean years. I mean now.
…
Half the kids in the country are in the lower half of the class. They didn't do very well in school, and they struggle in life these days because the world is becoming incredibly complex. They don't understand computers, they don't understand AI, and they don't understand all the things around them that they need to navigate the world.
One of the things that we've learned about education is that personal tutoring makes a big difference. If you have a personal tutor, you learn a hell of a lot more.
Every kid will now have a personal tutor from the beginning of their learning. Every kid. The lower half of the class will now have the ability to learn math, history, geography, all of that. And every kid will now be above average just like in Lake Wobegon.
And so we're going to see a remarkable transformation of education as the experience in education is highly personalized and unique to every single child going through school.
Daphne Koller, Cofounder of Coursera, the online education platform, former Stanford Computer Science Prof:
If we can actually offer a personalized education, we can get people to be in the upper half of the distribution. Turns out someone actually already did that experiment about 50 years ago. He was called Bloom. And there's this thing called Bloom's law that shows that if you do actually offer a personalized education, even to the less successful students at the beginning, they end up two standard deviations above the mean.
For those of you who don't remember your Gaussian distributions, two standard deviations means you're in the top 5%, not the top half, but the top 5% of the distribution. So imagine if everyone was actually in the top 5% of the distribution because we could offer them a personalized learning experience.
…
So a really good tutor understands what it is that their students do not understand, and they're able to figure that out and create an explanation that is tailored to the lack of comprehension of an individual student.
And I think for that, we need to actually create systems that are trained using reinforcement learning on not just any people, but on students who are in the process of learning, because that's what really good tutors do.
They understand and they learn to understand where the common misconceptions, the common failures of understanding are. And I think that's a really great challenge for the current AI systems.
…
And then there’s the value of human engagement. We all know the difference between going to the gym on our own, even with the best exercise videos, and going when there's a personal human trainer there, or going with a friend to whom we feel accountable, beholden to hold up our end.
And so I think that's a question of at what point will we believe that our AI systems are a replacement for a human, or do we still need to have a human teacher involved at least in some ways in the learning process, or a human friend? Because ultimately, learning is, for many people, a social endeavor.
Kevin Kelly, Founding Executive Editor of WIRED & current Senior Maverick:
I think the shape of the next several decades is going to be hugely increasing uncertainty in all dimensions. And part of it is because AI is an enabling technology and it's going to enable uncertainty in almost all other realms. And there's lots of other things that are happening in the world too that are also increasing our uncertainty.
The reason why there's lots of very, very smart people with all kinds of strong opinions about AI that are contradictory is because it's so, so uncertain. And I think what we can expect in the next couple decades is even more uncertainty about what it is that we're doing and why we're doing it and what we're trying to get out of it and whether we succeed.
…
One of my arguments about the AI doomers is that they really overrate the idea of intelligence thinking that smart things and smart entities trump everything else. But you put Einstein and a tiger in a cage — and who wins? It's not the smartest one that wins, it's the one who can do the most in the world.
I think what we're going to see with AI is a really long delay in its effect on us. We'll be wowed by all these things that have intelligence, but I think it's going to have a very, very delayed impact on the world. It may take decades.
Self-driving cars are one example. It's going to take a long time for truly autonomous self-driving cars to arrive because you can't just put a brain in the car. You have to change the infrastructure and everything else around it.
…
Already there are different species of AIs that we've made. And the one that we're most impressed by right now is a very, very particular kind of species of AI: the LLM, the transformer neural nets. And it's only one variety of the many kinds of possible minds that we can make.
And this particular variety is really interesting because it's been trained on the bulk of human creation. And so it is in some ways like an amplified version of the most average human there is. And the most average human is slightly racist, sexist, and mean. And so that's what it kind of delivers to us.
But the thing about it is that we're not going to be satisfied with that level of human. We want our AIs to be better than us. We're demanding that they be better than us even though they're kind of a mirror right now. And that's okay, we can program that in.
But the challenge is that we don't know what better than us looks like. We don't know how to be better. Is that woke? Is that post-woke? Is that super woke? What does that even mean … I think in the end that they're going to make us better humans.
Anton Troynikov, Founder of AI Startup Chroma, former Research Engineer at Facebook:
We are a small team, but we are a small team punching above our weight because we use AI tooling as part of what we build. Personally, my productivity as a programmer has increased probably two, three, four, five fold.
Why? Because I can distill the complexity of any given programming problem that I'm working with down to a simple natural language question, and then receive information that I can quickly iterate on in tandem with these AI systems.
…
Consider how many more businesses can now be started, how many more companies can be founded, because you no longer need a team of dozens of people to build all these systems for your company. You can simply import them and instead work on the problem that exists out in the world, the thing you as a founder are actually focused on, rather than on your internal systems.
How many more people will have access to that now who never would've had access to it before without raising millions of dollars in venture capital? That's an incredible capability. That's at least on par with the web and the PC, where ordinary people gained the ability to publish to millions of people around the world.
…
The other piece that makes this technology so difficult to think about in a positive light is that when you have a general-purpose technology, it's very difficult to predict the second- and third-order effects.
So let's try to take a look at what it actually means to have a system which can individually teach a person any subject. Let's think about that on a global scale. So of course we're going to improve educational outcomes for children. That's great. However, think about this as a species-level capability.
How much faster are we as a species able to deal with crises when we have a cognitive augmentation that any person can pick up? Imagine if, in the next pandemic, rather than spending months sequencing the virus and telling people to sanitize surfaces instead of treating it as aerosolized, we could instantly spin up thousands of people to a PhD-level understanding of that specific virus and figure out exactly what we need to do.
AI doesn't generate crises. It increases our ability to deal with them as a species. And this is a capability that exists now for the first time in history.
Brad DeLong, Economic History Professor at UC Berkeley, Author of Slouching Towards Utopia:
Individually for each of us, this wave now breaking over us is wonderful — is magnificent. Let me put things in some perspective: Let me remember back to the days of Grace Hopper, Alan Turing and Johnny von Neumann and all that has happened since.
The mainframe gave me, back when I was a graduate student, more computational power at my disposal than Richard Feynman had when he was working on the Manhattan Project and had 100 computers to boss. That is, 100 women with BAs and calculators.
The PC gave me, as an assistant professor, the equivalent of a typing pool plus a professional editor, plus a graphic draftsman. The internet gave me the equivalent of a full-time 24/7 runner to the biggest library in the world, plus more. Then the smartphone, now genuinely useful machine learning.
Each and every one of those things has made a quarter of my job so easy to accomplish that I can do five times as much of it, thus doubling what I can do personally, and they overlap. So by now I have had four doublings of what I can do work-wise since I turned 18.
And this wave is certainly a fifth and maybe the biggest of the five. Genuinely useful machine learning is a huge deal.
Individually, this is a huge productivity bonanza. Figure that for 20% of the American workforce, it will come close to doubling what they can do individually.
Dave Fontenot, Cofounder of HF0, a Residency for Repeat Startup Founders, Longtime Hackathon Organizer:
So I was in New York last weekend and I asked everyone at this conference, they were all interested in AI. I asked them, "Raise your hand if you want to invest in AI." And then I asked them to keep their hand up if they've been to San Francisco in the last six months, and everyone's hand went down.
And I told them, "Y'all don't even know." Because the type of stuff that's happening in the neighborhood is insane. I've never seen anything like this.
…
This neighborhood, let me tell you all, it is popping right now. It's crazy. There'll be a research paper that comes out on Friday morning at Stanford. By Friday night at dinner, the researcher is at our house for dinner and we have a bunch of friends, a bunch of different AI founders at the house, all asking them questions, all talking about this new paper.
By Sunday, it's implemented in a third of their startups and it's in production and everything, ready to go to their customers on Monday. That's the type of stuff that we're seeing in the neighborhood in San Francisco.
And I've organized a lot of hackathons, probably more than just about anyone except Major League Hacking, and I've never seen anything like this. It's really, really crazy.
Mickey Friedman, Cofounder of Flair AI, Previous Winner of an AI Grant, some work at Tesla:
I'm one of the co-founders and CEO of Flair AI. Flair is an AI-powered design tool. We combine AI with a built-in control layer to effortlessly stage product photo shoots digitally in seconds.
We launched only three months ago, but have onboarded over half a million users, and our goal is to reduce the cost of coordinating product photo shoots from thousands of dollars to a $10 monthly subscription.
Mike Haley, Senior Vice President Leading Research at Autodesk, background in Machine Intelligence:
The real world is complicated. Designing and making things happens in the real world, and it's hard. And humans, while we have incredible capacity to think and imagine and be creative, are essentially biased dimensionality reducers. We can't handle that level of complexity. We have biases that make us think we can, but we actually can't. So there are a lot of problems to be solved in the world that I believe generative AI is going to start solving.
…
Who has ever used design software to design something in the world? Was it easy? No, it wasn't. Well, it's because design software is built to solve complex problems in this complex world. It's professional software; it's hard to use. And what do we use when we try to use that software? We use a trackpad, a mouse, a keyboard, maybe an electric pencil if you're lucky.
We've got these huge, complicated ideas in our head, and this immensely complicated world that we're trying to model, and we're trying to compress it all down into something that we do on a keyboard and with a mouse.
It's insane. But we've gotten used to it. I've never known anything different. That's going to change with generative AI, the ability for the systems to understand us at language level, at gesture level, at image level, at sketch level, at idea level. This is going to allow designers and creatives to start working at a pace and an ease that we've never seen before.
And what this means is this technology becomes more accessible as a result. This is super exciting. We need more good design in the world. We need people to be able to think of the complexity that sits in the world and create designs that respond to that.
People sometimes come to me and say, "But what if AI takes all the design jobs?" I say: "There's an infinite number of design jobs out there." These problems are so hard that this is going to allow us to solve them. So I'm super-excited about the ability of this kind of technology to solve problems.
Carrie Hernandez, Cofounder & CEO of Rebel Space Technologies, formerly of SpaceX and the U.S. Military:
Space is really hard. Don't let SpaceX fool you with the launches, it's still really hard. So you need AI to make those ambitious goals in space happen. It's really important.
…
How do we get to the moon? I have serious conversations, real ones on a day-to-day basis, about the lunar internet, about asteroid mining. And if you think that we're doing that without AI, I would say you're crazy. I don't think there's really a path that gets us there without it.
At my company, Rebel Space, we're focused heavily on the communications and security realm. We work with NASA quite a bit on this leading edge of how to apply AI to communications.
Once you send a space system out into space, it's not talking to you right away. It's got to make its own decisions. It's got to figure out who it can talk to, how to bring that data back, and how to keep what it's trying to do working without failing. This is really critical to everything that NASA and the commercial space market need.
How do you keep those space systems from getting hacked? How do you give them the independence and the ability to detect when something is wrong, whether that's malicious or simply misconfigured? People make mistakes, and environments change.
…
Going forward, as we start to look at mining and extracting resources from different parts of the solar system, it becomes even more critical because you need AI autonomous functionality to make those decisions real-time.
You're sending something out: you want to bring back resources, you want to help build better batteries for climate change, you're trying to bring water back to the moon. Whatever it is, it's not a human doing it. It's the Star Trek version of reality, where a person tells a computer to help figure it out, because you're not doing that math and you're not making those decisions.
So I think that despite all the concern over where AI is headed, sparked by ChatGPT and LLMs, it's really going to be critical.
Marc Oost, Global Leader of AI, Analytics, & Data Science for Capgemini, global tech firm based in Paris:
The only issue — and what we also should always keep in mind — is localization. Europe is not the U.S. ChatGPT is very good in English. It's not so good in Dutch, I can tell you. I speak Dutch. And the same goes for other languages as well.
And what we see happening more and more is that new LLMs are being created also within Europe, new Generative AI is popping up there. And I think, when that starts to become more mature, we’ll see a democratization of basically all knowledge across the world.
And if we start combining these LLMs like ChatGPT, but also these European versions, we can basically create one new big knowledge base that everybody can start using. And that I think is one of the most important things to come.
I want to recognize those whose support makes this ambitious project by Reinvent Futures possible, including this article. Our partners Shack15 club in the Ferry Building in San Francisco and Cerebral Valley, the community of founders and builders in Generative AI. And our sponsors Autodesk, Slalom, Capgemini, and White & Case who help with the resources and the networks that bring it all together. We could not do this without you. Thanks.
Good stuff! I especially liked Kevin Kelly's: "it is in some ways like an amplified version of the most average human there is." And the follow-up that we want AI to be better than us and that means figuring out what that means.
Another favorite: "AI doesn't generate crises. It increases our ability to deal with them as a species. And this is a capability that exists now for the first time in history."