The cofounders of Apple's Siri, Google's NotebookLM & a leading bioengineering organization lay out what's now possible in the kickoff of The Great Progression event series in San Francisco
i'm not as smart as anyone on your panel. but then i was never as smart as the F100 board or senior executives that i harassed for over 20 years either. i added value by seeing the mistakes they were making, the bad assumptions they were making. i can do the same here.
LLMs are a big, unexpected jump, but they are trained on readily available data. that means LLMs are dependent on what people "publish", and that is a big problem because:
- we publish fiction
- we publish partial truths
- we publish straight up lies
- we don't publish "secret" information
- we hide negative information
- we publish culturally shaped data that radically conflicts (US, UK, China, Iran, et al.)
- we publish "value judgements" as if they are fact
- we publish "facts" without providing the context
- we republish information
- we often hide the source of information
- data represents "everyone" acting online, but that leaves out a lot of people
- the masses (the less educated and less successful) provide much more data than leaders
- one "expert" publishes data, then thousands of non-experts respond (often with gibberish)
- minors (who have never worked or paid taxes) publish more than adults with families, debt, assets, etc.
so, yes, we can use LLMs to improve the productivity of many processes, and we can solve problems that have been unsolvable (including problems that are broad and interconnected), but we are still several improvements away from LLMs that solve all our problems or take over our role in society and governance.
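To make a couple of the data-quality points concrete, here is a minimal sketch in Python of the kind of filtering a training pipeline might apply before using published text. It only illustrates two items from the list (republished information and hidden sources); the document fields "text" and "source" are assumptions for illustration, not any real pipeline's schema.

```python
import hashlib

def clean_corpus(docs):
    """Naive pre-training filter: drop republished (duplicate) text and
    text with no identifiable source. Each doc is assumed to be a dict
    like {"text": ..., "source": ...} (hypothetical schema)."""
    seen_hashes = set()
    kept = []
    for doc in docs:
        # Normalize whitespace and case so trivially republished copies collide.
        normalized = " ".join(doc["text"].lower().split())
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue  # republished information: keep only the first copy
        if not doc.get("source"):
            continue  # hidden source: reliability can't be weighed, so drop it
        seen_hashes.add(digest)
        kept.append(doc)
    return kept

docs = [
    {"text": "The Earth orbits the Sun.", "source": "textbook"},
    {"text": "the earth orbits the sun.", "source": "blog"},   # a republication
    {"text": "Our product never fails.",  "source": None},     # no source given
]
print(len(clean_corpus(docs)))  # -> 1
```

Even a pass like this says nothing about fiction, partial truths, missing context, or culturally shaped claims, which is why cleanup alone does not close the gaps listed above.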
I keep thinking of AI models in general as babies. What happens when they turn into rebellious teenagers? How quickly will that take place, or will it take place at all? I also recognize that humanizing a machine is part of my own personal Achilles heel.
It's interesting how you talk about babies and teenagers. A new book called Raising AI does something similar. I know the author; he spoke at one of my events a couple of years ago in SF. I just got the book and happened to be reading the preface today. You might check it out: https://www.amazon.com/Raising-AI-Essential-Parenting-Future-ebook/dp/B0DD3CV7GB/ref=tmm_kin_swatch_0?_encoding=UTF8&dib_tag=se&dib=eyJ2IjoiMSJ9.eGriZyjBLNX35DbrEyufUhGR-Iu5gyo5m6NCd22_WiY.TX0CBbUfrw5gkffQjD0imMkybUR_Zk8riUbaPO53Kik&qid=1749567573&sr=8-1
Tweens already! I guess I should once again speed up my own timeline.
i can see a disgruntled employee, disgruntled contractor/third party, or a hacker unleashing a "disruptive" AI on a company, but i've yet to see an AI model that initiates its own goals/objectives/motives... i've also yet to see an AI model that adds functionality to itself...
on the other hand, it's hard to imagine that AI-based viruses (which surely exist at the national level - NSA, DOD, CIA, FBI, PRC, North Korea, Israel, et al.) aren't already operating. i'm aware of two levels of protection (up to four, depending on how you view infrastructure). the first level is national: the same national entities (and organized crime entities) that would create such software (i know someone who was essentially CTO for the NSC) are defending their nation and its organizations (companies, schools, utilities, etc.) against it. the second level is individual organizations: even tiny companies have a firewall, and large enterprises and government entities typically have complex (the primary contemporary issue) security systems.
Good to know. Thank you for your willingness to share your expertise and your thoughts.
interesting... having built models, i think of them as software...
humans do anthropomorphize. in the '70s, i remember peers referring to software as "he" or "she". a specific quote from Karen Walsh about JES2 (IBM's mainframe job entry subsystem) was, "What's wrong with JES2? Is she down?" i thought, how odd, it is a computer program...
the first book i remember reading on this topic was "The Adolescence of P-1". oolcay itway
I keep seeing the words "exponential" or "exponentially". What I don't see is any real understanding of just how significant that concept is to our developing world. The rate of change and growth in all fields of human endeavor is so fast that it is almost impossible to imagine. What happens when these endeavors reach the point of the unimaginable? How fast will we reach that point? Tomorrow? I am beginning to believe that the timeline to this massive explosion in technology is far shorter, more complicated, and more earth-shattering than we have anticipated to this point.
at this point, percentage-wise and criticality-wise, few processes and decisions have been turned over to ML/AI models.
humans determine the rate of societal change, not technology. as a society, we throttle change because we don't like uncertainty and change causes uncertainty.
i view AI/ML models as tools. we ask for viewpoints and they provide them. but we are the policy makers. we assign a task to a model and it does it. but we are the decision authority.
as long as we maintain that delineation, humans will be the biggest threat to humans...
I agree with your “throttle” observation. So you think AI will be used by people to help or harm other people? Do you think we have the capacity or even the capability to maintain that “delineation” given the exponential rate of change?
every human innovation, from using fire to developing calculus, from creating language to building levers, from conquering flight to the WWW, was intended to solve a problem or take advantage of an opportunity. societies decide which innovations are adopted and which fall aside (do you still use your CB radio?). societies adopt innovations at different rates, and from different starting points.
every innovation has likewise been used to do good and harm - the tools we humans use change, but human nature tends to remain the same...
we certainly have the capacity to control AI. i say this because we've had nuclear weapons my whole life, and no nuclear wars... yet. we stopped the use of chemical weapons in war (for the most part). we haven't released deadly biological weapons... yet.
when it comes to the question of handing over control to AI, i'm less concerned about a "greedy" or "power hungry" person handing over the reins to AI, and more concerned about those people who think "humans are the problem", so to "save the planet", or "save us from ourselves", or "save the oppressed from the oppressors", we need to give control to the AI...
what do you think?
I think human nature is so all-over-the-place crazy that there is no way I could ever predict what final decisions the people on this planet will make. I do know that the timeline to make any decision, good, bad, or indifferent, is short, too short.
yep. every human decision is emotional. your preferences versus mine... economics 101.
circa 1997, experts (including self-proclaimed experts) were saying that by 2015 all mundane tasks would be eliminated by the WWW and technology: carrying keys, keeping track of appointments, making shopping lists, looking for your reading glasses, etc., etc., etc. we/they could only imagine minor changes, so they provided concrete examples.
now, many of the same experts (who got the 2015 predictions wrong) are struggling to understand how the models will advance, and how society will use them, so they use "passive" language in their descriptions. you caught that. it doesn't mean it will become bad or good. it means the experts don't know, and that potentially they learned from their mistaken forecasts of the 1990s...
Without understanding what consequences mean to humans, how can AI make decisions for humans?
Another observable factoid: why is AI currently mostly used AGAINST humans (firing people, ruining people's lives, engaging in surreptitious surveillance, etc.), but when it comes to giving basic advice to help humans with various tasks, AI turns into a "gee, I only know 3% of what I should know, and I don't care, because I have no feelings" machine?
There is no doubt in my mind that AI is generally used against humans. Why would that be?
Why would humans invent a machine that, on balance, is used to hurt humans, and makes you feel great about it?
Any chatbot that is supposed to help a customer in any business you can think of is woefully, totally inadequate for that intended purpose. Surely you have encountered that yourself.
Can you explain how you would prevent that from happening?
I am not coming to these questions from a "moralistic, religious, political, leftist, rightist, socialist, communist, absurdist, atheist, philosophical, etc., etc., etc. point of view".
I just see that AI, when it is supposed to work in favor of humans, comes across like a hastily folded paper airplane, whereas when it comes to working against humans, it is smarter and more expensive than 50 nuclear missiles. Are you or anyone else aware of this discrepancy?
For me, it just appears somewhat logical that, if you want "a humanity that prefers to live peacefully with each other", then whatever AI you invent must be geared in a fashion that is at least slightly biased toward that end. And, without doubt, I don't see AI developing in this direction at all.
using your logic, "process engineering", Lean Six Sigma, continuous improvement, and all other efficiency efforts are generally used "against" humans. likewise, computers and computer software are generally used "against" humans.
i say this because the purpose of IT is to automate tasks that don't require human judgement - which replaces work done by humans, and given enough automation, replaces jobs and employees.
now comes AI, and it seems we can automate tasks that "once" required human judgement... AI is a statistically based extension of legacy computer programming that applies human judgement to situations by predicting what a human (expert) would do based on past activity (data).
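To illustrate "predicting what a human (expert) would do based on past activity", here is a minimal, hypothetical sketch in Python: a tiny table of past claims decisions and a nearest-neighbour lookup that imitates them. The claim features, amounts, and decisions are invented for illustration, not drawn from any real system.

```python
# Hypothetical history of an expert claims adjuster: (claim_amount, prior_claims) -> decision
history = [
    ((500,  0), "approve"),
    ((700,  1), "approve"),
    ((9000, 4), "deny"),
    ((8000, 3), "deny"),
    ((1200, 0), "approve"),
]

def predict(claim_amount, prior_claims, k=3):
    """Imitate the expert by finding the k most similar past cases
    and taking the majority decision - bare-bones statistical 'judgement'."""
    def distance(case):
        (amount, priors), _ = case
        # Scale the dollar amount so both features contribute comparably (assumed scaling).
        return abs(amount - claim_amount) / 1000 + abs(priors - prior_claims)
    nearest = sorted(history, key=distance)[:k]
    votes = [decision for _, decision in nearest]
    return max(set(votes), key=votes.count)

print(predict(claim_amount=800,  prior_claims=0))  # -> "approve"
print(predict(claim_amount=8500, prior_claims=5))  # -> "deny"
```

A real model replaces the lookup with learned statistical structure, but the principle the comment describes is the same: the output is a prediction of what past experts did, not an independent judgement.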
AI is already working for humans: it is making many processes more efficient, which benefits consumers and owners, and it eliminates tasks that are boring and onerous.
i frankly don't get the point of your paper plane, nuclear missile analogy...
AI is just the most recent efficiency tool. we advanced efficiency with innovations like the wheel, farming, animal-powered farm implements, wind-powered boats, steam-powered devices, combustion-powered machines, electrically powered devices, the printing press, telegraphs, radio, TV, computers, and the internet. each of these innovations (and more) made humans more efficient and allowed us to specialize to be even more efficient. now comes AI. the main difference with AI, in my view, is that it makes decision making more efficient, and decision making has traditionally been immune to innovation... now programmers, accountants, financial analysts, economists, claims adjusters, sales people, and others in decision-making jobs (that were once safe) are threatened.