16 Comments
drllau:

I think you're missing one ... societal _trust_ ... sure, AI, bio-engineering and knowledge-composition are all sexy, but as a species our "superpower" vs other animals is co-ordination (though tool-making and written language rank highly too). To coordinate (rather than compete in zero-sum games) requires some degree of overlapping world-views, from which trust can emerge. I'd point out that an early form of contract was the covenant (as in the Judeo-Christian sense, witnessed by God) under seal. If you look at real property, we've progressively moved from social trust (I know XYZ and can shame/guilt you for failure) to institutional trust (courts, cultural norms, business practices).

We are in the process of converting this trust into computational law (RegTech), which implies a shift away from state sovereignty towards more effective superclusters.

arthur smith:

covenant was also enforced with weapons, because some people have no shame...

institutional trust is enforced by nations with police forces and armies...

1.) please define supercluster.

2.) explain how a "supercluster" enforces property rights?

drllau:

> because some people have no shame...

shame, guilt and face are societal constructs - see Lessig's Code 2.0 (Code and Other Laws of Cyberspace). When the group is large enough that you can escape social scrutiny, that's where institutions like courts/judges, credit rating bureaus and accounting standards come into play. The public has some confidence in the stock market because listed companies (supposedly) adhere to a degree of transparency in their accounts (yeah, fraud still happens), and we have whistle-blower legislation and related mechanisms to encourage truth-revelation (e.g. short-selling), plus enforcers such as the SEC.

Superclusters come from agglomeration theory, but not just spatial - they also form along value networks. Look at the fab supercluster around TSMC, which ranges from exotic materials to capital equipment to clean rooms. The UK in its imperial glory days had clusters in marine shipbuilding, now reduced to Lloyd's insurance and international law. Property rights are slowly migrating to web3.0: much as chess has a small set of rules, property rules can get encoded into standards, e.g. ERC-20 for fungible tokens, which, if clustered and backed by liquid reserves, become stablecoins. Enforcement comes from a mix of computational forensics plus self-executing contracts. Watch https://www.youtube.com/watch?v=UIBR99gOLOQ
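To make the "small set of rules" point concrete, here is a toy Python sketch of what an ERC-20-style rule-set boils down to (illustrative names only, not any particular chain's API - real tokens live on-chain as contracts):

```python
class Token:
    """Toy ledger mirroring the ERC-20 interface (illustrative, not on-chain)."""

    def __init__(self, supply: int, issuer: str):
        self._balances = {issuer: supply}            # who holds what
        self._allowances = {}                        # (owner, spender) -> spending cap
        self._total = supply

    def total_supply(self) -> int:
        return self._total

    def balance_of(self, owner: str) -> int:
        return self._balances.get(owner, 0)

    def transfer(self, sender: str, to: str, amount: int) -> bool:
        if self.balance_of(sender) < amount:
            return False                             # rule: no overdrafts
        self._balances[sender] -= amount
        self._balances[to] = self.balance_of(to) + amount
        return True

    def approve(self, owner: str, spender: str, amount: int) -> None:
        self._allowances[(owner, spender)] = amount  # delegated spending cap

    def transfer_from(self, spender: str, owner: str, to: str, amount: int) -> bool:
        if self._allowances.get((owner, spender), 0) < amount:
            return False
        if not self.transfer(owner, to, amount):
            return False
        self._allowances[(owner, spender)] -= amount
        return True


# the whole "property right" is just these rules applied to a shared ledger
usd_x = Token(supply=1_000_000, issuer="reserve")
usd_x.transfer("reserve", "alice", 100)
assert usd_x.balance_of("alice") == 100
```

A stablecoin is roughly that ledger plus the off-chain promise that the "reserve" balance really is backed 1:1 by liquid assets - which is where the trust question comes back in.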

arthur smith:

i understand those descriptions. thank you.

"Enforcement comes from a mix of computational forensics plus self-executing contracts". i wouldn't use the word enforcement, but i get your point. that seems more like record keeping than enforcement, but it is certainly part of enforcing/ensuring property rights.

drllau:

if one limits contracts to an exchange of promises (they can be more complicated), then the courts are there to enforce such promises. Under automation, execution is faithful (short of an EMP wiping all records), so the issue shifts to pre-execution, the bargain stage, which still has problems of power disparity. A transaction presumes a knowledgeable buyer and a willing seller, so you can see why many rules are established to prevent asymmetric info (the lemons problem) or coercion/unconscionable conduct (stealing lollipops from kids, or the economic equivalent of catfishing).
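A toy sketch of what "faithful execution" means once the bargain is struck (plain Python, hypothetical names, not any real smart-contract platform) - the code does exactly what was encoded, so the disputes move upstream to whether the encoded terms were fair:

```python
import time


class Escrow:
    """Toy self-executing escrow: once funded, only the encoded rules decide the outcome."""

    def __init__(self, buyer: str, seller: str, price: int, deadline: float):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deadline = deadline          # unix timestamp for delivery
        self.funded = False
        self.delivered = False
        self.settled = None               # who ends up with the money

    def fund(self, amount: int) -> None:
        if amount != self.price:
            raise ValueError("must fund the exact agreed price")
        self.funded = True

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self, now=None) -> str:
        """Executes the bargain mechanically: pay the seller if delivered, refund the buyer after the deadline."""
        now = time.time() if now is None else now
        if not self.funded:
            raise RuntimeError("nothing to settle")
        if self.delivered:
            self.settled = self.seller
        elif now > self.deadline:
            self.settled = self.buyer     # automatic refund, no court needed
        else:
            raise RuntimeError("still within the delivery window")
        return self.settled
```

Whether the price was fair, or whether the buyer understood what "delivery" meant, is decided before this code ever runs - which is exactly the pre-execution, bargain-stage problem above.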

Then there's the prior problem of defining/granting property rights in the first place, as economists such as de Soto noted with colonial land grabs, or with privileged positions, e.g. Soviet privatisation.

arthur smith:

interesting phraseology, but your point is not clear. are you elaborating on what i wrote or correcting what i wrote? or both?

limiting contracts to "exchange of promises" is like building a roof without a foundation and walls...

1. automated execution being "faithful" presumes the automation is error-free... and even then the automation is dependent on its inputs... GIGO...

2. of course, there are other transactions besides buyer - seller...

both 1. and 2. depend on regulations, social values, and competence of the entities involved

i haven't read much by de Soto. but as an economist, what would he, and all the other economists, do if they couldn't go around pointing out the flaws in various situations.

regardless, in re de Soto, exactly what is the problem with defining/granting property rights you are referring to? i ask because he points out more than one...

arthur smith:

3 informative videos.

i'm hearing we can likely fall back on Maslow's hierarchy of needs (health, security, societal coherence) to establish WHY we should use AI, biotech, and new energy moving forward.

WHAT to do will be a challenge. Expect conflicts between the role of government versus individual liberty. The role of government v NGOs. Conflicts between developing nations and developed nations. Conflicts between nations that are democracies and nations that are autocratic.

HOW to act will be an even bigger challenge with groups advocating for equity v merit, humans v nature, rights v privileges, etc.

So, i expect we will clumsily stumble forward and coalesce much later... not unlike the era of globalization.

Peter Leyden:

Yes, you outline some of the big challenges. That is generally what I am turning to now and will develop more in the book version of this project. I will be doing monthly essays on pieces of that iteration, and the next physical event in the fall will focus on the economic implications and how the economy will need to morph in the next 25 years. In other words, what to do. Or what we could do.

drllau:

> how the economy will need to morph in the next 25 years. In other words, what to do

data-driven decision making? Rather than go-with-gut feelings (I trust Putin), try to be objective in screening/sorting/selecting options?

> Or what we could do.

technology marches on but we're still _monkey brains_ with digital fisticuffs, so mistakes are made faster. On the human side, improved empathy and retaining an AI kill-switch will help keep humans in the loop for any major decision.
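A minimal sketch of what "humans in the loop" looks like in practice (pure Python, hypothetical names; the point is just that consequential actions block on an explicit human sign-off rather than auto-executing):

```python
def requires_approval(action: str, impact: str) -> bool:
    """Decide whether a proposed action is consequential enough to need a human sign-off."""
    return impact in {"high", "irreversible"}


def execute_with_human_in_loop(action: str, impact: str, approver=input) -> str:
    """Run low-impact actions automatically; block high-impact ones on explicit human consent."""
    if not requires_approval(action, impact):
        return f"auto-executed: {action}"
    answer = approver(f"Approve '{action}' (impact: {impact})? [y/N] ")
    if answer.strip().lower() == "y":
        return f"executed with approval: {action}"
    return f"blocked (kill-switch): {action}"


# example: a routine action goes through, a major one waits for a person
print(execute_with_human_in_loop("reformat log files", "low"))
print(execute_with_human_in_loop("launch trading strategy", "high", approver=lambda _: "n"))
```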

also remove dopamine-addictive apps (or treat them as a mental health issue) ... we've already gotten to the stage of having a shorter attention span than a goldfish.

arthur smith:

I'm reminded of the Cluetrain Manifesto, which was influential...

Maksim Raskolnikov:

What Arthur Smith says (below or above) feels a bit understated: "Expect conflicts between the role of government versus individual liberty." Because that is like calling WW2 "an unfortunate hiccup on the way to economic equilibrium", or explaining religious warfare by saying "Some people believe different things, so there were some minor disagreements". Also: "Global nuclear war may cause some adjustments in stock market pricing".

I might as well add: "Artificial Intelligence will make Trump smarter and his followers will become really nice folks." This appears to be a good description of the Peter Leyden visionary insight.

I guess I should continue in this fashion, just to fit into this futuristic cult here: "Artificial Intelligence, if it works out as planned, will cause an extra 50 million unemployed people within 10 years, but since the perpetual Trump administration is known for loving kindness, all these folks and their families will be taken care of in the most merciful Christian way by the richest MEGA Maga churches, since they took over the work of the Social Security Administration"

Naa, forget it, I will look for a job at the Onion.

arthur smith:

not really sure what point(s) you are trying to communicate. i'll give it a go though:

1.) the future is bleak (expect economic wars, religious wars, nuclear war)

2.) AI is sarcastically promised to change political views you don't like

3.) AI will do far more harm than good (almost anti-malthusian), as was predicted about many innovations

4.) the role of government is to take care of people (save the oppressed from oppressors)

5.) government love and kindness fixes everything - well except $35 trillion debt, increasing % of nation on welfare, growing wealth gaps, growing tax gaps (40% pay zero income tax, top 10% pay 70% of tax)

6.) Christianity will sarcastically pick up where government fails

you forgot:

> environmental collapse

> destruction of democracy

> skynet

your cult sounds a bit depressing...

drllau:

the AI story is far from finished, but the current implementation is certainly primitive ... I blame Hollywood for creating artificial expectations of "AI" when in fact, at the moment, a generative pre-trained transformer (GPT as a technical construct) is an idiot parrot with a very large dictionary. As experts have noted (https://garymarcus.substack.com/p/llms-are-not-like-you-and-meand-never), current AIs lack the real-world models that meat-sack humans have: an innate framework for time, space and causality.
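The "parrot with a very large dictionary" metaphor, boiled down to a toy bigram model in Python - a deliberate caricature (real transformers learn far richer statistics), but it shows why pure next-token statistics carry no model of time, space or causality:

```python
import random
from collections import defaultdict, Counter


def train_bigrams(text: str) -> dict:
    """Count which word follows which -- the entire 'dictionary' of this parrot."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model


def parrot(model: dict, start: str, length: int = 10) -> str:
    """Generate text purely from follow-frequencies -- no world model at all."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)


corpus = "the cat sat on the mat and the cat ate the fish"
print(parrot(train_bigrams(corpus), "the"))
```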

1. there will always be conflicts, whether over resources or ideology ... AI can be seen more as an accelerant (shortening the decision cycle) than a cause.

2. the spectrum of views always existed, it's just that you could physically move to find affinity somewhere ... hate the CA hippie commune? relocate to live with Texan rednecks ... it's just that social media algorithms deliberately recommend posts which inflame audience segments, to get them to stay longer and be exposed to more ads

3. the future of AI (in its generic form, not the current hallucinating idiocy) is yet to be written; as a potential user, you can pick how to interact

4. govts ... complicated, but let's make a distinction between rule of law (and other public goods) vs rule **BY** law; one can be protected (eg property rights) or compelled (by surrendered authority), so finding the right balance is never-ending

5. wasn't it Reagan who said the most feared words (from govt) are "I'm here to help you"? Public choice theory notes that once bureaucracies are established to "solve" problems, they often persist past their use-by date and may even perpetuate the problem to justify their existence. However, a good govt is still a positive ... cf Singapore's neo-Confucianism, as anarchy sucks (see Afghanistan)

6. xref wicked problems https://en.wikipedia.org/wiki/Wicked_problem as progress would have solved the simple ones (sewage, antibiotics, markets, etc ...)

arthur smith:

i didn't get these new messages from your prior posts...

1. obvious, agree.

2. obvious, agree.

3. true today - you can pick how you use AI. i have found some very useful applications.

4. eh, when one is being protected, someone else is being imposed upon... rule by law = rule of law...

5. "public choice theory", eh, that is true of every human organization. once an entity garners enough of what it seeks, the entity shifts from innovation to optimization. PS - every human creation was intended to solve a problem...

6. i had enough of a former CIO cramming "wicked problems" down my throat. hated the MIT article... or was it HBR...

drllau:

> 3. true today - you can pick how you use AI. i have found some very useful applications.

except that the power curve of thoughtful, moral use-cases is outweighed by the stupidity of the dopamine-addicted masses. I'm reminded of the very early days of commercialising the internet in the late 80s, when the early adopters were the ... err ... mature adult industry, who went from brown-paper-package delivery to enthusiastic eCommerce adoption. Similarly, we're seeing morally dubious if not borderline criminal (ab)use of AI (deep-fakes, slopBots, pump&dumps, etc). This tells you that you cannot separate tech from its social context, which was the original point about societal trust. As a collective, a group needs to determine what its norms are, what is "acceptable", and craft tools (legal or otherwise) to rein in the negative aspects. When you get wide gaps in norms (cf freedom of choice vs right to life) you get conflict, if not outright culture wars (usually resolved by exile or, worse, assassination).

So AI, biotech and knowledge-composition (esp. what gets deliberately ignored) raise more questions than they offer solutions. To what extent should humans be "modified"? Correcting gene defects is a narrow but safe application, but what about selecting for IQ or some cosmetic physical trait? And will "superhumans" be allowed to participate in the Olympics? Who gets to determine the dark side of science, or what constitutes "terror" (apparently some claim there is a distinction between state-sponsored and state-sanctioned)? And who gets to control AI? Individuals, the techno-priesthood, NGOs, or what passes for representative governance? (eg a gerontocracy favours tax cuts as it passes the debt burden to unborn generations)

arthur smith:

this is what you meant in your original comment? wow. i don't see that. okay. well, it seems we agree 100% or near 100% - assuming i'm interpreting this comment accurately.

every innovation humans develop (from language to fire to the wheel to the atomic bomb) is built to "solve some problem" - human 101.

of course, many innovations then get used in ways that some people see as being nefarious, while others view the use as righteous - human 101.
