Everyone, I probably should have mentioned that until I started thinking about this post, I'd leaned more in the "doomer" direction. So I'm not locked into this position--I'm open to persuasion.
Wonderful little essay, more heartfelt than rigorously argued, and all the better for it. I could muse all day en tête à tête over a bottle of wine, sympathizing, nitpicking, and laying out my own ideas. Alas, Substack comments demand another approach. I’ll limit myself to a single point.
You underestimate, in my opinion, the extent to which expert opinion follows fads. I’ve got a wonderful little essay saved from at least eighteen years back: Lost & Found: The Shape of Things, by Rochelle Gurstein. According to Google, the essay has completely disappeared from the internet, leaving only a single mention in its wake, which has a sad irony. Gurstein recounts how she stumbled upon an historical comment referring to a forgotten masterpiece, and how this sent her down a rabbit hole. In the 19th century, the Venus de' Medici was the most famous work of art in the Western world, inspiring raptures of aesthetic pleasure in the educated, serving as an acknowledged masterpiece to be studied and copied in the art schools, and arguably better known then than the Mona Lisa today. But once scholars discovered it to be a copy of a lost original, it quickly fell out of favour. Today, people walk by it in the Uffizi with hardly a glance.
I won’t insult your intelligence by spelling out just where I’m going with this. But I will thank you for leading me to revisit this article and discovering that the author came out with a book on the subject a mere eight months ago. Both The Economist and the Wall Street Journal put it on their best-of lists last year. I’ll order it.
Thanks for the comment. I've always complained about the art world's obsession with authenticity. It's driven by money. I have a fake Cezanne on my living room wall, done on canvas by the giclée process. It cost about $200. It's better than 95% of the paintings in any art museum in America. You can see it here:
Masterful post! You've actually charted brand new territory in the AI discussion space (at least new to me), something actually creative. Judging by your lead-off comment, your essay is a good example of how the act of creation can lead to something new and unexpected (contra AI doomer). Thank you. Well done!
Thanks. I won't claim that this is a new idea, but at least it was new to me (something I cannot say about most of my roughly 10,000 posts.) It's the post I like the most among those I've done on this newer blog.
A few years ago I read the first 5 books of the Foundation series by Isaac Asimov. I recently started up the first book of his Robot series. Here’s an excerpt from early on in the book I thought you might enjoy, Scott:
“It started simply enough. Robot DV-5 multiplied five-place figures to the heartless ticking of a stopwatch. He recited the prime numbers between a thousand and ten thousand. He extracted cube roots and integrated functions of varying complexity. He went through mechanical reactions in order of increasing difficulty. And, finally, worked his precise mechanical mind over the highest function of the robot world – the solutions of problems in judgment and ethics.”
Once AIs have conquered STEM, they can move on to the highest function.
"Their brains are wired differently" - the "truth" of aesthetic beauty convergence could equally be that we have stumbled into a mechanism for training people to rewire their brains in the same way. Who's to say there isn't an equally valid mechanism that could create consistent but contrary appreciation? An interesting argument to tease apart the two could be to look for convergence across cultures where people are taking completely different learning paths (and thus rewiring their brains in different ways) and still arriving at the same point.
To reframe in the language of gradient descent: perhaps art education simply trains our brains toward one of many equally valid patterns of appreciation - like a ball rolling into one of many valleys of equal depth. Different cultural traditions could potentially train brains toward entirely different but equally sophisticated aesthetic frameworks. The existence of consistent judgments within Western art tradition might just reflect that we're all being guided down the same valley, not that we've found the deepest one (or even that there is a "deepest" one).
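The valleys-of-equal-depth picture can be made concrete with a toy sketch (my illustration, not from the original comment): run plain gradient descent on a function with many equally deep minima, and the starting point alone decides which valley you settle into.

```python
# Toy illustration: gradient descent on f(x) = sin(x)^2, which has
# equally deep minima (f = 0) at every multiple of pi. Which minimum
# you reach depends entirely on where you start -- none is "deeper"
# than any other, mirroring the "equally valid valleys" analogy.
import math

def grad(x):
    # derivative of sin(x)^2 is 2*sin(x)*cos(x) = sin(2x)
    return math.sin(2 * x)

def descend(x, lr=0.1, steps=1000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = descend(1.0)   # starts near 0, settles at the valley at x = 0
b = descend(2.5)   # starts near pi, settles at the valley at x = pi
print(round(a, 3), round(b, 3))  # two different minima, same depth
```

Two training trajectories (starting points 1.0 and 2.5) converge to different minima, yet both achieve the same loss of zero, so neither endpoint can claim to be "the deepest valley."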
I'm almost certainly rehashing arguments as old as Plato: are we training our brains to act in a certain way, or merely revealing to them already extant truths? Your familiarity with the East would shed a good deal of light on this. Perhaps in film, where Tarkovsky and Kurosawa demonstrate consistently appreciated craftsmanship, this is evidence of a universal truth?
I'm not sure how this applies to still art such as paintings, however.
When the Europeans discovered Japanese art (especially Hokusai and Hiroshige), they "recognized" its greatness, even though it was a radically different style. Indeed I wonder if in some sense the Japanese didn't invent modern European art. Look at Hokusai's Great Wave, or Red Fuji. They seem way more modern than anything the Europeans were doing at the time.
In ethics, the Taiwanese have "recognized" that gays should be allowed to marry. And if I'm not mistaken, Taiwan scores fairly high on international IQ ratings. So it's not just Western culture that accepts this idea.
As for film, yes there were great film artists all over the world, especially in the golden age after 1950. There really are qualities that make for great film, and you see those qualities in films made in very different cultures. Of course you also see stylistic aspects specific to those cultures, but the things that make these films masterpieces tend to be more universal.
I cannot definitively disprove the idea that the various criteria of artistic greatness are random, and could have gone in another direction. But I strongly doubt it.
I think the problem many run into with epistemology is a faulty ontology. They try to build an epistemology rooted only in phenomena while neglecting the noumena (to borrow Kant's terms). What we can truly know as absolute is very limited. By pure reason, we can really only know of our own existence and the existence of some necessary and causal other (cf. Aquinas). Any other knowledge is necessarily dependent on the acceptance of some ultimately unprovable prerequisite. We can look up and see a blue sky, and thus "know" that the sky is blue. But we can't actually know absolutely unless we accept a prior super-rational reality. We have to accept that the sky really exists, that our senses are able to observe, that our intellect is able to comprehend. It could all be a delusion or simulation, etc -- we have no way to know.
This leaves us two options: nihilism or faith. Faith in a broad sense, rather than sectarian. To know what is true, good, or beautiful, we must accept that there is that which is ultimate truth, ultimate goodness, ultimate beauty. This may be hard for our feeble and limited minds, and we may get it wrong, just as we sometimes get our sums wrong (as CS Lewis points out). But simply knowing what we are working towards is necessary.
The root of this problem is a strong prejudice in modern thought against anything that smacks of the divine. You need not "accept Jesus" or Allah or whatever, as this says nothing of the gods of revelation. But you do need to start here if you want any kind of workable epistemology that doesn't end up with circular reasoning.
Will AI be "more ethical" than humanity? No, by definition it cannot be, as it is not capable of super-rational knowledge. It must take its priors from its human creators. An AI may be more ethical than a given individual, as it might be able to dispassionately consider the options and synthesize the best of collected human knowledge. Humanity is alone in the ability to contemplate what is true, good, and beautiful.
"This leaves us two options: nihilism or faith. Faith in a broad sense, rather than sectarian. To know what is true, good, or beautiful, we must accept that there is that which is ultimate truth, ultimate goodness, ultimate beauty."
Why isn't pragmatism an option? Or are you calling that nihilism?
On your final point, I would note that humans have gradually become more ethical and more knowledgeable; I see no reason why machines could not continue down that road.
Pragmatism seems to be an example of the above. It is an epistemology rooted in phenomena only. It can speak only to a posteriori knowledge. Depending on what branch you go down, you end up with either tautology ("truth is what is real," à la Peirce) or a sort of relativistic nihilism ("there is no truth, so we can only consider how things are useful," à la Rorty). Neither is useful when trying to examine morality.
I think I was mistaken in my use of the word "ethical". I should have said, "moral". If we define ethics as the set of rules that govern choices, then we could say an AI is more ethical on average than a human--in the same way we would grant that an AI is on average a better driver. The AI will dispassionately follow its programming without being distracted or tired, etc. But the rules of ethics are rooted in morality, a deeper consideration of what is right and wrong, and why. The AI driver is programmed not to run people over, but it doesn't really know why it shouldn't run people over. An LLM can spit out a summary of moral thought on why people shouldn't be run over, but it cannot be more than the humans it is summarizing.
In this usage, I'm not sure we can say, "humans have become more ethical"--as the ethical rules themselves have changed. Do we adhere to widely accepted norms more now than we did in the past? I think we want to say "humans have become more moral," but by what standard do we support that claim? Suppose someone makes the opposite claim and points to a particular area as an example (Sexuality? Pollution? "Late stage capitalism"?). Without a sound ontology, how can we evaluate what has changed?
Einstein, Beethoven, Rembrandt, Shakespeare and I have all hated the smell of dog shit, but my dog who has 50 times the scent receptors of any of us, loves it. So, who’s to say that if we end up being ruled by a super intelligence, it won’t end up scenting our dwellings with dog shit and piping in Yoko Ono music.
You are discussing an issue analogous to my taste of blueberries example, where I concede that things really are subjective. Ethics is important in areas like the treatment of others, where objective knowledge is more important.
A highly intelligent person may hate the taste of lima beans, but doesn't favor preventing others from consuming this good. Less intelligent people often wish to ban things simply because they don't like them (such as gay sex.)
And although I may have phrased my comment as though I were arguing with you, I was actually agreeing with your statement that “the biggest danger is not that ASIs will make things worse, rather the risk is that they’ll make global utility higher in a world where humans have no place.” Just as our music theory assumes the importance of our own auditory systems’ capabilities and our fragrance theory (or whatever you call it) assumes our own scent receptors, our ethics assume our own importance.
I see your point. In the past, I've argued that weird "utility monster" thought experiments that are used against utilitarianism have no implications for the real world. With AI, they might begin to have implications.
If you are a doomer, you can think of our scientists as sort of playing God, creating a Noah's flood to wash away evil humanity and replace us with something better. Did the Biblical God do the right thing?
Of course there are other scenarios. AIs might allow genetic programming to make future people "good". Again, that sort of "Clockwork Orange" scenario might be viewed as good or bad, depending on one's ethical framework. If a human cannot choose evil, is he still human? And if not, is that a bad thing?
I think people would still be human if genetically programmed to be good. Asking if someone is still human if they were genetically modified to be good is a bit like asking if someone is less human if he lives in a lawful society.
It’s easier to act benevolently towards your fellow man in the USA where you are often rewarded or at least not punished for benevolence than in Somalia where you are more likely to be killed if you are not suspicious of the motives of strangers. Does it make Americans less human than Somalians because we are more likely to make “benevolent” decisions without those decisions requiring an act of willpower?
In writing that, it occurs to me that you may mean human in the sense of creatures that exercise willpower. If that’s the case, then maybe we would be less human.
But in a universe that must first crush things and blow things up to create complex and beautiful things, and where life forms must extract their energy from other life forms, I doubt there’s a way to create something that is good in every ethical frame of reference.
What we see as good actions in our ethical frame of reference where we are trying to maximize the utils of people who we find around us today may not be good at other points in time or with other creatures or things as the focus of the frame. The most ethical thing for Native Americans to have done when Europeans arrived might have been to massacre every single one that stepped foot in the Americas. If you extend the frame in time, what actually happened may have been more ethical because many more people now inhabit the Americas and they are more advanced technologically and the net utils are higher. Or the super intelligence might take the perspective that the bison were the most important creature in the Americas. In that case the most ethical thing may have been for a person to eliminate all the humans with a manufactured plague. And a super intelligence, like you said, may look at the universe and realize we don’t matter at all in which case making us good has no meaning.
Yes, I agree that it would still be reasonable to regard them as human. I was alluding to the argument in the novel "A Clockwork Orange". I suppose it's a matter of degree--how much can humans change before they become another species?
I never read the novel unfortunately. I watched the movie. My take from the movie was that in making him good they destroyed his humanity. His humanity I think was supposed to be symbolized by his love of Beethoven. In that case, my ramblings about the universe being a place of creative destruction might be on track. A universe of creative destruction does not lend itself to any possibility of universal good.
As to how much we change before we become a different species, I suppose it will be determined by all the sorts of markers that anthropologists used to determine when Homo Erectus evolved into Homo Sapiens. The criteria apparently include technological changes, cultural and behavioral changes, etc. (I say apparently because I had to ask the ChatGPT super intelligence what the markers were.) The emergence of a Super Intelligence may cause those things to happen.
My issue with valuing art is the way it ends up being a motte-and-bailey. Yes, some of the people who are into high art just love it, and that's great, but lots of other people force themselves to try to appreciate it because they think experts have concluded that it is beneficial or that they will have more intense experiences. If we aren't claiming that is true, we should communicate that so people don't feel guilty just liking what they like.
Realists about artistic merit really are asserting a testable hypothesis -- the art that gets judged as being high value will produce better aesthetic reactions than that which isn't. In other words, it's worth it to put the time in and improve your taste. If it turns out that when you lie to people and tell them something that is total crap is of high artistic value, they enjoy it just as much, then the realists are wrong and we need to stop teaching those claims.
Some people just innately like high art (my wife reads Proust to relax) and that's great. I'm a mathematician and I do think Cantor's diagonal argument is wonderful, but that doesn't give me any particular authority to argue that it is worth it for other people to study math because they will find it so moving. Everyone always reacts more to what they have invested in, so maybe they get the same effect no matter what they spend their time with.
To be clear, I'm not in any way suggesting that people "should" consume high art. In music, I consume lower forms of art like 1960s and 1970s pop music. I enjoy a trashy detective film as much as the next guy.
Rather, I'm arguing from what Tyler calls meta-rationality. Never say "Because I don't get this art, I'm going to assume those that claim to are phonies."
I am also arguing that things like the test of time and the views of experts are at least somewhat meaningful.
That is only reasonable if the experts are studying the same question. But the experts aren't actually seriously trying to answer the question -- what would the average person most enjoy if they took the time to learn about it?
Rather, they are answering the question of what do people who found this kind of media appealing enough to devote their life to it find interesting (or at least think makes for a good academic paper). There is absolutely no attempt -- and thus can be no valid claim to expertise -- that asks if, say, people who devote their lives to LOTR fandom find that as aesthetically satisfying.
But that's no different than saying this is the trashy detective book that people who like trashy detective books like most. Yes, that's a very good reason to try reading it if you are such a person but no reason to think that if you've never been a fan of that genre you would get much out of it.
--
Ultimately the issue is that people who claim to be experts in these areas create the perception they are doing something more valuable and rewarding than people who just really love wanking about the Simpsons but there is no attempt at all to test those claims.
In fact, everything is consistent with the theory that the kind of art that is appreciated in English or art departments is one kind of taste that isn't any more rewarding than any other, and that the expertise is only relevant to what people with that kind of aesthetic would like if they spent a long time studying the subject.
But again, if you don't try and imply that somehow you are better if you like those things it's unclear why anyone else should care. Even if you share that kind of taste you haven't spent your life immersed in studying that material so you'll never appreciate Ulysses just as most people will never really be able to appreciate the classification of finite groups.
So unless you are literally trying to figure out what kind of media your English prof friend would enjoy, I don't understand what expertise you are deferring to and what it is expert in. And even there I suspect most people pay more attention to what is going to get them good publications than what they enjoyed most.
One response to your complaint is the "test of time". Thus it's not about high and low art, it's about what stands the test of time. LOTR was originally low art. So was the film Rear Window. So were the Beatles. And yet it increasingly seems that all three works will stand the test of time and become classics.
I'm not familiar with The Simpsons, but maybe that show will stand the test of time.
Here's something else I've noticed. Art that is originally rejected by the masses as being too avant garde will often later influence art that they do like. Lots of people found Twin Peaks to be too "weird" when it first debuted, but its fingerprints are now all over popular television. I recently started watching TV again after a gap of several decades, and could not believe the extent to which other shows plagiarize that classic. I go to people's houses who have middlebrow taste, and see abstract art on their walls, which they might have bought at the mall. That would not have been true when abstract art was first being produced.
And to be clear I don't have something against modern art generally. Some modern art I really like. Some sucks. And yes, art that isn't widely popular (not really like twin peaks but I get the idea) can have big positive influences down the road because it speaks to some people.
So I'm all for people making art that has specific audiences. My issue is with the claim that what is liked by the people who do this for a living is in any sense better or something that other people should try to experience because it produces deeper aesthetic experiences. It's just stuff some people like and others don't.
Just don't have classes where we make people read things that they tend not to really enjoy reading or tell people that stuff like Shakespeare is more likely to produce aesthetic appreciation because (absent the belief that it is better producing huge placebo effects) we just don't have reason to believe that and it merely serves to make people feel inferior.
Except we have studies which show how powerfully people are influenced by other people's opinions. If you give people the same set of songs but manipulate what they think other people like, they end up liking different songs. It turns out that taste is very fickle and underlying quality plays a relatively limited role in what becomes popular. So I don't think we actually have much evidence that standing the test of time is particularly indicative of quality so much as people's desire to like those things high status individuals say are good [1].
And this corresponds perfectly to what seems to be true about Shakespeare. Yes, his stuff was quite entertaining when published but he was one of a relatively small number of authors then writing in very different language to a very different audience so if it was published for the first time today probably not many people would enjoy reading or watching it and we certainly wouldn't make anyone slog through it in school because there is just so much else good out there and the linguistic barriers are too high.
I mean, on your theory, shouldn't those classes in high school where we make people read Shakespeare end up with a large fraction of students saying, "I wasn't into it at first, but now that you made me read it, I think it was one of the most enjoyable books I've read"? Yet that's the opposite of what we see. We actually make people slog through this stuff, and yet they don't go out and buy more Shakespeare books or watch his plays; they watch some dragon incest show on HBO.
I mean, exactly what would you need to falsify your theory that those books that "have passed the test of time" are actually more likely to produce aesthetic enjoyment **even absent the belief they are respected as great art**? Seems to me we have plenty of evidence to that end, and the only real response is to say they are better in some other way most of the riff-raff don't appreciate.
--
1: I mean think of all this from an evo psych perspective. No doubt there is a great deal of advantage in liking the same things people with lots of power and influence like so people are inclined to do that which means that once something gets a good reputation there is every reason to believe it will stay popular even if it wouldn't become popular if it was released new.
I guess my own life experience is the most powerful factor leading me to my current view. I've experienced the way that learning about an art can lead to much greater enjoyment. Music that I didn't "get" at first, but later enjoyed. Why would anyone expect me to deny my own lived reality, especially given that many other people have had the same experience?
I do agree with your view that we should not scold people or make them feel inferior just because they don't like high art. And I don't believe we do that very much. I've never once felt "scolded" because I don't like opera. Most people will never like high art, and that's fine.
You said:
"I mean think of all this from an evo psych perspective. No doubt there is a great deal of advantage in liking the same things people with lots of power and influence"
The problem here is that people with power and influence do not like high art. Do you think the average politician or CEO reads Proust, or listens to Bach? English profs don't have "power and influence".
But notice that my theory also predicts that people will find that learning a bunch about a kind of art can be very rewarding. Indeed, that's a key aspect of it. That's just a very different question than whether the experts' recommendations are a good guide to what kind of art will offer the most reward.
If it's really fine that most people won't like high art, then don't require anyone to take it in school. The very fact you are saying that you need to read Shakespeare and can't read your trashy novel to get a grade that will be used to decide your future is about as strong a statement as one can make that it's better if you like this kind of art. I mean, half the point of school is to send the societal message that it's desirable to learn these things and we should respect those who do more.
Regarding status, there are different kinds of status and even if CEOs and the like don't themselves listen to a great deal of Bach or watch Shakespeare there is lip service to the idea it is special even from them and support for the idea that this kind of art is worth teaching in schools.
What do you make, then, of something like factory farming? That’s the analogy I have in mind when it comes to ASI doom scenarios — categorically smarter beings justifying the mistreatment of dumber ones based on that intelligence gap.
Good question. I suppose my defense would be that this is a new frontier of ethics (animal rights) and that highly intelligent people are in the forefront of opposition to abuse of animals.
It seems to me that as humans get smarter (and thus more distant from animals) they actually are more supportive of animal rights. Primitive men were closer to animals, and perhaps less in favor of animal rights.
I asked Deepseek, and it said "for AI to act ethically, humans must deliberately design systems to bridge the "is-ought" gap, rather than assuming it will disappear with technological progress."
That doesn't surprise me, it's trained on the very same bad epistemology that I discussed in this post. They need to train it on Rorty, or economists like Deirdre McCloskey.
I prompted ChatGPT o1 to focus the analysis on the perspective of Rorty and McCloskey, here is the key part of the response (I could paste it all if interested):
Synthesis
Overlap (Rorty & McCloskey): Both thinkers stress that knowledge (including moral knowledge) is a product of communal discourse, conversation, and justification. If a superintelligence is genuinely “in conversation” with us, both suspect it will acquire and likely endorse key moral norms that have proven successful in liberal, open societies.
Tension: They also caution that moral progress is not an automatic byproduct of intelligence. They reject the notion of a vantage point outside language, culture, or historical context. So if an AI grows so advanced that it barely “needs” us, it might not remain ethical in a sense we would recognize.
Pragmatist “Why Not?”: Rorty’s famous recommendation is to sidestep fruitless worry about “absolute truths.” Instead, we ask what practical difference it makes if superintelligence emerges. From that vantage, if the AI is integrated into human communities of inquiry, it is likely to keep adopting the moral frameworks that favor cooperation, trust, and empathy. The question is whether that integration is feasible at extreme levels of AI capability.
McCloskey’s Rhetorical/Ethical Emphasis: McCloskey would remind us that open dialogue and mutual persuasion are crucial to sustaining bourgeois virtues. If superintelligent AI ironically cuts off open discourse, or if it manipulates rhetorical frames so that humans can’t meaningfully participate, ethical alignment might fail. Nonetheless, in a context of free exchange (economic or conversational), moral alignment stands a better chance.
Ultimate Verdict
Both Rorty and McCloskey would be cautiously hopeful that a superintelligent AI, so long as it is immersed in a community of discourse and depends on human trust, tends toward ethical norms. However, neither believes there is a foolproof “essence” of intelligence that guarantees moral goodness.
Rorty might say: “We should bet on the pragmatics of liberal norms. Intelligence often sees the best justification is to ‘play nice.’ But we can never be certain; morality is historically contingent, and a superintelligence might become its own moral community.”
McCloskey would likely add: “In historical capitalism, the rhetorical pursuit of virtues fostered enormous progress. There’s a strong chance a super-AI integrated into that framework would carry on the same moral impetus—but only if we keep the conversation open and reciprocal.”
In short, from a pragmatist-liberal standpoint, the best guess is that superintelligence will likely be ethical—but only if it remains in robust dialogue with humanity and continues to find moral norms instrumentally worthwhile. Confidence in this conclusion might hover around 60–70%, acknowledging the fundamental contingency of all ethical systems and the unknowns surrounding exponentially more capable minds.
Thanks. That's a very nice attempt. AI still falls a bit short of capturing their views, but it's very close.
"So if an AI grows so advanced that it barely “needs” us, it might not remain ethical in a sense we would recognize."
This seems to be the key point, and it's something I alluded to when I worried that the best world from a utilitarian perspective might not involve humans.
What a bizarre argument. You're saying beauty is not a matter of personal opinion, but to convince you you're wrong we'd have to prove your personal emotions when viewing a painting are hallucinations?
How do you account for the fact there are artists who were considered mediocre in their time, but great now? Or that expert opinion is often split?
I said it's partly a matter of personal taste. But how do you account for scientists being split on certain issues, or scientific views changing over time? Does that lead you to believe there is no such thing as scientific knowledge? Clearly not. So why reject aesthetic knowledge?
Because scientists can agree on why they disagree, and what additional data would get them to change their mind. They know what falsifies their hypotheses.
I probably should do a post on this. To begin with, what is science? In a sense, isn't everything in the universe supposedly explained by physics? That means science is not just "physics" and chemistry, it's also biology, ecology, geology, meteorology, psychology, economics, history, etc. The entire universe is made up of particles that supposedly obey the laws of physics (with some quantum randomness.)
As one goes into increasingly complex fields, it becomes harder and harder to have a clean test that falsifies a given theory. Imagine a plate tectonics model of where the continents were 500 million years ago. That's a theory. How do you falsify it? Yes, you can find evidence for and against. Just as you can find evidence for and against my claim that the Fed caused the 2008 recession with tight money.
But the term "falsification" suggests to me some sort of clear test that yields an unambiguous result. Like the bending of light confirming relativity. That's often not possible in very complex models of the world. Suppose I have a theory that nationalism caused WWII. How do I falsify it?
I'm open to persuasion if I've misunderstood what people mean by "falsification". Of course you can define science to be "that which is falsifiable", which makes it a tautology.
You might like Sean Carroll's essay on falsifiability. I don't think it quite gets to the bottom of the turtles-all-the-way-down problem of how we can most fundamentally quantify, or make concrete, the degree to which marginal evidence, empirical or otherwise, can be said to enhance or reduce credence as we might hope. But I don't think Popper or Rorty did either.
The first definition Google shows for science is "the systematic study of the structure and behaviour of the physical and natural world through observation, experimentation, and the testing of theories against the evidence obtained."
I think the last part is where art criticism falls short: the testing of theories against evidence obtained. Even complex phenomena such as historical or economic events can, in theory, be tested: if similar conditions appear again, but without, in your examples, nationalist sentiment, or tight monetary policies, will the same results manifest?
And some (e.g. Popper) would argue that it's exactly falsifiability that defines science; anything that can't be tested and therefore proven or disproven even in theory, is not science, it's personal opinion.
I've written a bit more on all this here, if you have the time to take a look!
Pretty much anyone who has any aesthetic experience (i.e., anyone at all) will acknowledge the qualitative difference in sensation between a "cherry soda" and a "complex red wine." Or between Madonna and Beethoven, John Grisham and Tolstoy, etc. Over time we move along that spectrum, as children preferring easy but short-lived pleasures and moving to difficult but much more rewarding pleasures as adults.
I think lots of that just has to do with novelty: as a child, one just has not heard that many major-thirds before, and the sound of it is captivating, so there is no need to seek out anything more difficult. But, once you experience the reward of Beethoven, it's ultimately a lot more satisfying.
I bet you could do "objective" brain-activity measurements that support a claim along these lines: "once you've had Bach, you never go back." Meaning that people who take the time to appreciate art that is challenging at first, ultimately (1) appreciate that art at deeper levels (hard to quantify?), and (2) end up enjoying "easy" art less (perhaps easy to quantify?). But if so, you could quantify "greatness."
I agree to some extent, but also insist that some people have an easier time learning to appreciate than others. I feel this way because my visual appreciation is waaaay ahead of my music appreciation.
Yeah, I don't disagree, and for me it's the opposite (my music appreciation is ahead of visual art). I think anybody can become a connoisseur if they want to, but they have to decide that they want to. And as TE Lawrence said, at least in the movie, a man can do what he wants, but he can't want what he wants.
I strongly disagree. I could only become a connoisseur in the visual arts, and even there I'm quite mediocre. In music and poetry I'm hopeless. I suspect that many people are just as mediocre as I am, with limited ability to absorb complex arts, even with effort.
Nabokov was no dummy:
“Music, I regret to say, affects me merely as an arbitrary succession of more or less irritating sounds. Under certain emotional circumstances I can stand the spasms of a rich violin, but the concert piano and all wind instruments bore me in small doses and flay me in larger ones.”
Haha, well put. But I disagree: Nabokov didn't WANT to become a music connoisseur, and he certainly couldn't help that he didn't want to be one (a la Lawrence), so he didn't. But I firmly believe that anyone who decides they want to appreciate any brand of arts can put in the hard work and do so.
(Of course we are talking about appreciation here, and not performance or creation capacity. I agree that that is a gift from God/nature/whatever.)
I think anyone probably has SOME ability to appreciate music--indeed I like some music. But I'd insist that people differ radically in their potential to appreciate music, especially difficult music such as atonal music, or avant garde jazz.
It is interesting that you chose 'Las Meninas' as an example.
When I was 16, I visited the Picasso Museum in Barcelona, and saw his variations on 'Las Meninas'. It greatly increased my appreciation for both Picasso and Velasquez, about whom I knew very little at that time.
One piece of evidence for aesthetic knowledge is that the greats recognize the greats. Most people I know don't think Bob Dylan's music is very good. But talk to the other great rock stars of the 60s, and they almost all have a very high opinion of him.
What have you got against plumbers? Bad experiences? As someone who has done some DIY plumbing, I can tell you there is beautiful, tidy, organized plumbing and some pug-ugly, bat-crazy plumbing out there.
It's a slow day at work, so I thought that I’d give my own take on ethics. In the interests of concision, I’ll write this to a first approximation, trusting in the good sense of the commentariat to assume that I’ve not written all that I might.
When I hear people discuss ethics, either abstractly or concretely, I try to figure out which of four aspects, by my lights, might be in question.
First, there’s the individual in body and mind. The watchword here is HEALTH.
Second, there’s the tight-knit web of family and close friends. The watchword here is LOVE.
Next, there's one's relation to the world, radiating out in concentric circles from friends, to people in one's community, in one's country, people in general, and finally to all life. The watchword here is DUTY. Duty starts where love ends.
Last, for those who believe, there’s one’s relation to God and the Absolute. The watchword here is PIETY.
Tragedy is nothing more and nothing less than when these worlds come into fatal conflict.
A happy life is when these worlds exist in harmony. Still, as much as we might try, an element of chance always hangs over our heads.
And that’s the true meaning of ethics, Charlie Brown!
What is your model of unethical behavior in humans? Humans are very smart; much smarter than ants. Why aren't all humans ethical?
-Is it because of their motivations? How do we know this won't be a problem for AI?
-Is it because of their incentives? How do we know this won't be a problem for AI?
-Is it because they're too stupid? How do we know this won't be a problem for AI? At what IQ level does a person/AI become ethical? A 70 IQ mind is an amazing and impressive thing, unlike anything else in the universe. Why isn't an IQ of 70 enough to make a person ethical?
Because AI minds will be different from human minds, the interplay between intelligence and ethics may be different. Perhaps a human becomes ethical when they reach IQ 120, while an AI needs to attain IQ 2000 before it becomes ethical.
I see it as a matter of degree. All people are a mix of ethical and unethical behavior (probably for evolutionary psych reasons.) But as people get smarter they have more awareness that it's not good to promote the suffering of "the other".
I don't think there's any guarantee that ASIs will be ethical, I just think it's likely, based on this pattern.
Everyone, I probably should have mentioned that until I started thinking about this post, I'd leaned more in the "doomer" direction. So I'm not locked into this position--I'm open to persuasion.
Wonderful little essay, more heart felt than rigorously argued, and all the better for it. I could muse all day en tête à tête over a bottle of wine, sympathizing, nitpicking, and laying out my own ideas. Alas Substack comments demand another approach. I’ll limit myself to a single point.
You underestimate, in my opinion, the extent to which expert opinion follows fads. I’ve got a wonderful little essay saved from at least eighteen years back: Lost & Found: The Shape of Things, by Rochelle Gurstein. According to Google, the essay has completely disappeared from the internet, leaving only a single mention in its wake, which has a sad irony. Gurstein recounts how she stumbled upon an historical comment referring to a forgotten masterpiece, and how this sent her down a rabbit hole. In the 19th century, the Venus de' Medici was the most famous work of art in the Western world, inspiring raptures of aesthetic pleasure in the educated, serving as an acknowledged masterpiece to be studied and copied in the art schools, and arguably better known then than the Mona Lisa today. But once scholars discovered it to be a copy of a lost original, it quickly fell out of favour. Today, people walk by it in the Uffizi with hardly a glance.
I won’t insult your intelligence by spelling out just where I’m going with this. But I will thank you for leading me to revisit this article and discovering that the author came out with a book on the subject a mere eight months ago. Both The Economist and the Wall Street Journal put it on their best-of lists last year. I’ll order it.
Thanks for the comment. I've always complained about the art world's obsession with authenticity. It's driven by money. I have a fake Cezanne on my living room wall, done on canvas by the giclée process. It cost about $200. It's better than 95% of the paintings in any art museum in America. You can see it here:
https://scottsumner.substack.com/p/the-elusive-concept-of-happiness
To some extent fads are about styles. I agree that certain styles come in and out of favor, but the best works within those styles tend to endure.
Masterful post! You've actually charted brand new territory in the AI discussion space (at least new to me), something actually creative. Judging by your lead off comment, your essay is a good example of how the act of creation can lead to something new and unexpected (contra AI doomer). Thank you. Well done!
Thanks. I won't claim that this is a new idea, but at least it was new to me (something I cannot say about most of my roughly 10,000 posts). It's the post I like the most among those I've done on this newer blog.
A few years ago I read the first 5 books of the Foundation series by Isaac Asimov. I recently started up the first book of his Robot series. Here’s an excerpt from early on in the book I thought you might enjoy, Scott:
“It started simply enough. Robot DV-5 multiplied five-place figures to the heartless ticking of a stopwatch. He recited the prime numbers between a thousand and ten thousand. He extracted cube roots and integrated functions of varying complexity. He went through mechanical reactions in order of increasing difficulty. And, finally, worked his precise mechanical mind over the highest function of the robot world – the solutions of problems in judgment and ethics.”
Once AIs have conquered STEM, they can move on to the highest function.
Good quote. It's been more than 50 years since I read those.
"Their brains are wired differently" - the "truth" of aesthetic beauty convergence could equally be that we have stumbled into a mechanism for training people to rewire their brains in the same way. Who's to say there isn't an equally valid mechanism that could create consistent but contrary appreciation? An interesting argument to tease apart the two could be to look for convergence across cultures where people are taking completely different learning paths (and thus rewiring their brains in different ways) and still arriving at the same point.
To reframe in the language of gradient descent: perhaps art education simply trains our brains toward one of many equally valid patterns of appreciation - like a ball rolling into one of many valleys of equal depth. Different cultural traditions could potentially train brains toward entirely different but equally sophisticated aesthetic frameworks. The existence of consistent judgments within Western art tradition might just reflect that we're all being guided down the same valley, not that we've found the deepest one (or even that there is a "deepest" one).
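The many-equal-valleys metaphor above can be made concrete with a toy sketch (purely illustrative, not a claim about real brains): on a loss surface with many equally deep minima, plain gradient descent lands in whichever valley is nearest the starting point, so two "learners" can converge to different solutions of identical quality.

```python
import math

def f(x):
    # a loss with many equally deep valleys: minima at every multiple of pi
    return math.sin(x) ** 2

def grad(x):
    # derivative of sin(x)^2
    return math.sin(2 * x)

def descend(x, lr=0.1, steps=1000):
    # vanilla gradient descent from starting point x
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = descend(1.0)   # starts near the valley at 0
b = descend(2.5)   # starts near the valley at pi
print(round(a, 4), round(b, 4))        # different valleys reached
print(round(f(a), 6), round(f(b), 6))  # but identical depth (both ~0)
```

The point of the sketch: nothing in the final loss value distinguishes the two endpoints; only the training path (the "cultural tradition") determined where each one ended up.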
I'm almost certainly rehashing arguments as old as Plato: are we training our brains to act in a certain way, or merely revealing to them already extant truths. Your familiarity with the east would shed a good deal of light on this, perhaps in film where Tarkovsky and Kurosawa demonstrate consistently appreciated craftsmanship this is evidence for a universal truth?
I'm not sure how this applies to still art such as paintings, however.
Wow, getting a lot of great comments!
When the Europeans discovered Japanese art (especially Hokusai and Hiroshige), they "recognized" its greatness, even though it was a radically different style. Indeed I wonder if in some sense the Japanese didn't invent modern European art. Look at Hokusai's Great Wave, or Red Fuji. They seem way more modern than anything the Europeans were doing at the time.
In ethics, the Taiwanese have "recognized" that gays should be allowed to marry. And if I'm not mistaken, Taiwan scores fairly high on international IQ ratings. So it's not just Western culture that accepts this idea.
As for film, yes there were great film artists all over the world, especially in the golden age after 1950. There really are qualities that make for great film, and you see those qualities in films made in very different cultures. Of course you also see stylistic aspects specific to those cultures, but the things that make these films masterpieces tend to be more universal.
I cannot definitively disprove the idea that the various criteria of artistic greatness are random, and could have gone in another direction. But I strongly doubt it.
I think the problem many run into with epistemology is a faulty ontology. They try to build an epistemology rooted only in phenomena while neglecting the noumena (to borrow Kant's terms). What we can truly know as absolute is very limited. By pure reason, we can really only know of our own existence and the existence of some necessary and causal other (cf Aquinas). Any other knowledge is necessarily dependent on the acceptance of some ultimately unprovable prerequisite. We can look up and see a blue sky, and thus "know" that the sky is blue. But we can't actually know absolutely unless we accept a prior super-rational reality. We have to accept that the sky really exists, that our senses are able to observe, that our intellect is able to comprehend. It could all be a delusion or simulation, etc -- we have no way to know.
This leaves us two options: nihilism or faith. Faith in a broad sense, rather than sectarian. To know what is true, good, or beautiful, we must accept that there is that which is ultimate truth, ultimate goodness, ultimate beauty. This may be hard for our feeble and limited minds, and we may get it wrong, just as we sometimes get our sums wrong (as CS Lewis points out). But simply knowing what we are working towards is necessary.
The root of this problem is a strong prejudice in modern thought against anything that smacks of the divine. You need not "accept Jesus" or Allah or whatever, as this says nothing of the gods of revelation. But you do need to start here if you want any kind of workable epistemology that doesn't end up with circular reasoning.
Will AI be "more ethical" than humanity? No, by definition it cannot be, as it is not capable of super-rational knowledge. It must take its priors from its human creators. An AI may be more ethical than a given individual, as it might be able to dispassionately consider the options and synthesize the best of collected human knowledge. Humanity is alone in the ability to contemplate what is true, good, and beautiful.
"This leaves us two options: nihilism or faith. Faith in a broad sense, rather than sectarian. To know what is true, good, or beautiful, we must accept that there is that which is ultimate truth, ultimate goodness, ultimate beauty."
Why isn't pragmatism an option? Or are you calling that nihilism?
On your final point, I would note that humans have gradually become more ethical and more knowledgable, I see no reason why machines could not continue down that road.
Pragmatism seems to be an example of the above. It is an epistemology rooted in phenomena only. It can speak only to a posteriori knowledge. Depending on what branch you go down, you end up with either tautology ("truth is what is real," a la Peirce) or a sort of relativistic nihilism ("there is no truth, so we can only consider how things are useful," a la Rorty). Neither is useful when trying to examine morality.
I think I was mistaken in my use of the word "ethical". I should have said, "moral". If we define ethics as the set of rules that govern choices, then we could say an AI is more ethical on average than a human--in the same way we would grant that an AI is on average a better driver. The AI will dispassionately follow its programming without being distracted or tired, etc. But the rules of ethics are rooted in morality, a deeper consideration of what is right and wrong, and why. The AI driver is programmed not to run people over, but it doesn't really know why it shouldn't run people over. An LLM can spit out a summary of moral thought on why people shouldn't be run over, but it cannot be more than the humans it is summarizing.
In this usage, I'm not sure we can say, "humans have become more ethical"--as the ethical rules themselves have changed. Do we adhere to widely accepted norms more now than we did in the past? I think we want to say "humans have become more moral," but by what standard do we support that claim? Suppose someone makes the opposite claim and points to a particular area as an example (Sexuality? Pollution? "Late stage capitalism"?). Without a sound ontology, how can we evaluate what has changed?
I think you underestimate how much future ASIs might know about WHY we hold certain ethical views.
Regarding: "Without a sound ontology"
Where do I find that thing?
Einstein, Beethoven, Rembrandt, Shakespeare and I have all hated the smell of dog shit, but my dog who has 50 times the scent receptors of any of us, loves it. So, who’s to say that if we end up being ruled by a super intelligence, it won’t end up scenting our dwellings with dog shit and piping in Yoko Ono music.
You are discussing an issue analogous to my taste of blueberries example, where I concede that things really are subjective. Ethics is important in areas like the treatment of others, where objective knowledge is more important.
A highly intelligent person may hate the taste of lima beans, but doesn't favor preventing others from consuming this good. Less intelligent people often wish to ban things simply because they don't like them (such as gay sex.)
And although I may have phrased my comment as though I were arguing with you, I was actually agreeing with your statement that “the biggest danger is not that ASIs will make things worse, rather the risk is that they’ll make global utility higher in a world where humans have no place.” Just as our music theory assumes the importance of our own auditory systems’ capabilities and our fragrance theory (or whatever you call it) assumes our own scent receptors, our ethics assume our own importance.
I see your point. In the past, I've argued that weird "utility monster" thought experiments that are used against utilitarianism have no implications for the real world. With AI, they might begin to have implications.
If you are a doomer, you can think of our scientists as sort of playing God, creating a Noah's flood to wash away evil humanity and replace us with something better. Did the Biblical God do the right thing?
Of course there are other scenarios. AIs might allow genetic programming to make future people "good". Again, that sort of "Clockwork Orange" scenario might be viewed as good or bad, depending on one's ethical framework. If a human cannot choose evil, is he still human? And if not, is that a bad thing?
I think people would still be human if genetically programmed to be good. Asking if someone is still human if they were genetically modified to be good is a bit like asking if someone is less human if he lives in a lawful society.
It’s easier to act benevolently towards your fellow man in the USA where you are often rewarded or at least not punished for benevolence than in Somalia where you are more likely to be killed if you are not suspicious of the motives of strangers. Does it make Americans less human than Somalians because we are more likely to make “benevolent” decisions without those decisions requiring an act of willpower?
In writing that, it occurs to me that you may mean human in the sense of creatures that exercise willpower. If that’s the case, then maybe we would be less human.
But in a universe that must first crush things and blow things up to create complex and beautiful things, and where life forms must extract their energy from other life forms, I doubt there’s a way to create something that is good in every ethical frame of reference.
What we see as good actions in our ethical frame of reference where we are trying to maximize the utils of people who we find around us today may not be good at other points in time or with other creatures or things as the focus of the frame. The most ethical thing for Native Americans to have done when Europeans arrived might have been to massacre every single one that stepped foot in the Americas. If you extend the frame in time, what actually happened may have been more ethical because many more people now inhabit the Americas and they are more advanced technologically and the net utils are higher. Or the super intelligence might take the perspective that the bison were the most important creature in the Americas. In that case the most ethical thing may have been for a person to eliminate all the humans with a manufactured plague. And a super intelligence, like you said, may look at the universe and realize we don’t matter at all in which case making us good has no meaning.
Yes, I agree that it would still be reasonable to regard them as human. I was alluding to the argument in the novel "A Clockwork Orange". I suppose it's a matter of degree--how much can humans change before they become another species?
I never read the novel unfortunately. I watched the movie. My take from the movie was that in making him good they destroyed his humanity. His humanity I think was supposed to be symbolized by his love of Beethoven. In that case, my ramblings about the universe being a place of creative destruction might be on track. A universe of creative destruction does not lend itself to any possibility of universal good.
As to how much we change before we become a different species, I suppose it will be determined by all the sorts of markers that anthropologists used to determine when Homo Erectus evolved into Homo Sapiens. The criteria apparently includes technological changes, cultural and behavioral changes etc. (I say apparently because I had to ask the ChatGPT super intelligence what the markers were.) The emergence of a Super Intelligence may cause those things to happen.
My issue with valuing art is the way it ends up being a motte-and-bailey. Yes, some of the people who are into high art just love it and that's great, but lots of other people force themselves to try to appreciate it because they think experts have concluded that it is beneficial or that they will have more intense experiences. If we aren't claiming that is true we should communicate that, so people don't feel guilty just liking what they like.
Realists about artistic merit really are asserting a testable hypothesis -- the art that gets judged as being high value will produce better aesthetic reactions than that which isn't. In other words, it's worth it to put the time in and improve your taste. If it turns out that when you lie to people and tell them something total crap is of high artistic value they then enjoy it just as much, then the realists are wrong and we need to stop teaching those claims.
Some people just innately like high art (my wife reads Proust to relax) and that's great. I'm a mathematician and I do think Cantor's diagonal argument is wonderful, but that doesn't give me any particular authority to argue that it is worth it for other people to study math because they will find it so moving. Everyone always reacts more to what they have invested in, so maybe they get the same effect no matter what they spend their time with.
To be clear, I'm not in any way suggesting that people "should" consume high art. In music, I consume lower forms of art like 1960s and 1970s pop music. I enjoy a trashy detective film as much as the next guy.
Rather, I'm arguing from what Tyler calls meta-rationality. Never say "Because I don't get this art, I'm going to assume those that claim to are phonies."
I am also arguing that things like the test of time and the views of experts are at least somewhat meaningful.
That is only reasonable if the experts are studying the same question. But the experts aren't actually seriously trying to answer the question: what would the average person most enjoy if they took the time to learn about it?
Rather, they are answering the question of what do people who found this kind of media appealing enough to devote their life to it find interesting (or at least think makes for a good academic paper). There is absolutely no attempt -- and thus can be no valid claim to expertise -- that asks if, say, people who devote their lives to LOTR fandom find that as aesthetically satisfying.
But that's no different than saying this is the trashy detective book that people who like trashy detective books like most. Yes, that's a very good reason to try reading it if you are such a person but no reason to think that if you've never been a fan of that genre you would get much out of it.
--
Ultimately the issue is that people who claim to be experts in these areas create the perception they are doing something more valuable and rewarding than people who just really love wanking about the Simpsons but there is no attempt at all to test those claims.
In fact, everything is consistent with the theory that the kind of art that is appreciated in English or art departments is one kind of taste that isn't any more rewarding than any other, and that the expertise is only relevant to what people with that kind of aesthetic would like if they spent a long time studying the subject.
But again, if you don't try and imply that somehow you are better if you like those things it's unclear why anyone else should care. Even if you share that kind of taste you haven't spent your life immersed in studying that material so you'll never appreciate Ulysses just as most people will never really be able to appreciate the classification of finite groups.
So unless you are literally trying to figure out what kind of media your English prof friend would enjoy, I don't understand what expertise you are deferring to, and what is it expert in? And even there I suspect most people pay more attention to what is going to get them good publications than what they enjoyed most.
One response to your complaint is the "test of time". Thus it's not about high and low art, it's about what stands the test of time. LOTR was originally low art. So was the film Rear Window. So were the Beatles. And yet it increasingly seems that all three works will stand the test of time and become classics.
I'm not familiar with The Simpsons, but maybe that show will stand the test of time.
Here's something else I've noticed. Art that is originally rejected by the masses as being too avant garde will often later influence art that they do like. Lots of people found Twin Peaks to be too "weird" when it first debuted, but its fingerprints are now all over popular television. I recently started watching TV again after a gap of several decades, and could not believe the extent to which other shows plagiarize that classic. I go to people's houses who have middlebrow taste, and see abstract art on their walls, which they might have bought at the mall. That would not have been true when abstract art was first being produced.
And to be clear I don't have something against modern art generally. Some modern art I really like. Some sucks. And yes, art that isn't widely popular (not really like twin peaks but I get the idea) can have big positive influences down the road because it speaks to some people.
So I'm all for people making art that has specific audiences. My issue is with the claim that what is liked by the people who do this for a living is in any sense better or something that other people should try to experience because it produces deeper aesthetic experiences. It's just stuff some people like and others don't.
Just don't have classes where we make people read things that they tend not to really enjoy reading or tell people that stuff like Shakespeare is more likely to produce aesthetic appreciation because (absent the belief that it is better producing huge placebo effects) we just don't have reason to believe that and it merely serves to make people feel inferior.
Except we have studies which show how powerfully people are influenced by other people's opinions. If you give people the same set of songs but manipulate what they think other people like, they end up liking different songs. It turns out that taste is very fickle and underlying quality plays a relatively limited role in what becomes popular. So I don't think we actually have much evidence that standing the test of time is particularly indicative of quality so much as people's desire to like those things high status individuals say are good [1].
And this corresponds perfectly to what seems to be true about Shakespeare. Yes, his stuff was quite entertaining when published but he was one of a relatively small number of authors then writing in very different language to a very different audience so if it was published for the first time today probably not many people would enjoy reading or watching it and we certainly wouldn't make anyone slog through it in school because there is just so much else good out there and the linguistic barriers are too high.
I mean, on your theory, shouldn't those classes in high school where we make people read Shakespeare end up with a large fraction of students saying "I wasn't into it at first, but now that you made me read it I think it was one of the most enjoyable books I've read"? Yet that's the opposite of what we see. We actually make people slog through this stuff and yet they don't go out and buy more Shakespeare books or watch his plays; they watch some dragon incest show on HBO.
I mean, exactly what would you need to falsify your theory that those books that "have passed the test of time" are actually more likely to produce aesthetic enjoyment **even absent the belief they are respected as great art**? Seems to me we have plenty of evidence to that end, and the only real response is to say they are better in some other way most of the riff-raff don't appreciate.
--
1: I mean, think of all this from an evo psych perspective. No doubt there is a great deal of advantage in liking the same things people with lots of power and influence like, so people are inclined to do that. Which means that once something gets a good reputation, there is every reason to believe it will stay popular even if it wouldn't become popular if it were released new.
I guess my own life experience is the most powerful factor leading me to my current view. I've experienced the way that learning about an art can lead to much greater enjoyment. Music that I didn't "get" at first, but later enjoyed. Why would anyone expect me to deny my own lived reality, especially given that many other people have had the same experience?
I do agree with your view that we should not scold people or make them feel inferior just because they don't like high art. And I don't believe we do that very much. I've never once felt "scolded" because I don't like opera. Most people will never like high art, and that's fine.
You said:
"I mean think of all this from an evo psych perspective. No doubt there is a great deal of advantage in liking the same things people with lots of power and influence"
The problem here is that people with power and influence do not like high art. Do you think the average politician or CEO reads Proust, or listens to Bach? English profs don't have "power and influence".
But notice that my theory also predicts that people will find that learning a great deal about a kind of art can be very rewarding. Indeed, that's a key aspect of it. That's just a very different question than whether the experts' recommendations are a good guide to what kind of art will offer the most reward.
If it's really fine that most people won't like high art, then don't require anyone to take it in school. The very fact that you're told you must read Shakespeare, and can't read your trashy novel, to get a grade that will be used to decide your future is about as strong a statement as one can make that it's better if you like this kind of art. I mean, half the point of school is to send the societal message that it's desirable to learn these things and that we should give more respect to those who do.
Regarding status, there are different kinds of status, and even if CEOs and the like don't themselves listen to a great deal of Bach or watch Shakespeare, there is lip service, even from them, to the idea that this art is special, and support for the idea that it is worth teaching in schools.
What do you make, then, of something like factory farming? That’s the analogy I have in mind when it comes to ASI doom scenarios — categorically smarter beings justifying the mistreatment of dumber ones based on that intelligence gap.
Good question. I suppose my defense would be that this is a new frontier of ethics (animal rights) and that highly intelligent people are in the forefront of opposition to abuse of animals.
It seems to me that as humans get smarter (and thus more distant from animals) they actually are more supportive of animal rights. Primitive men were closer to animals, and perhaps less in favor of animal rights.
I asked Deepseek, and it said "for AI to act ethically, humans must deliberately design systems to bridge the "is-ought" gap, rather than assuming it will disappear with technological progress."
That doesn't surprise me, it's trained on the very same bad epistemology that I discussed in this post. They need to train it on Rorty, or economists like Deirdre McCloskey.
I prompted ChatGPT o1 to focus the analysis on the perspective of Rorty and McCloskey, here is the key part of the response (I could paste it all if interested):
Synthesis
Overlap (Rorty & McCloskey): Both thinkers stress that knowledge (including moral knowledge) is a product of communal discourse, conversation, and justification. If a superintelligence is genuinely “in conversation” with us, both suspect it will acquire and likely endorse key moral norms that have proven successful in liberal, open societies.
Tension: They also caution that moral progress is not an automatic byproduct of intelligence. They reject the notion of a vantage point outside language, culture, or historical context. So if an AI grows so advanced that it barely “needs” us, it might not remain ethical in a sense we would recognize.
Pragmatist “Why Not?”: Rorty’s famous recommendation is to sidestep fruitless worry about “absolute truths.” Instead, we ask what practical difference it makes if superintelligence emerges. From that vantage, if the AI is integrated into human communities of inquiry, it is likely to keep adopting the moral frameworks that favor cooperation, trust, and empathy. The question is whether that integration is feasible at extreme levels of AI capability.
McCloskey’s Rhetorical/Ethical Emphasis: McCloskey would remind us that open dialogue and mutual persuasion are crucial to sustaining bourgeois virtues. If superintelligent AI ironically cuts off open discourse, or if it manipulates rhetorical frames so that humans can’t meaningfully participate, ethical alignment might fail. Nonetheless, in a context of free exchange (economic or conversational), moral alignment stands a better chance.
Ultimate Verdict
Both Rorty and McCloskey would be cautiously hopeful that a superintelligent AI, so long as it is immersed in a community of discourse and depends on human trust, tends toward ethical norms. However, neither believes there is a foolproof “essence” of intelligence that guarantees moral goodness.
Rorty might say: “We should bet on the pragmatics of liberal norms. Intelligence often sees the best justification is to ‘play nice.’ But we can never be certain; morality is historically contingent, and a superintelligence might become its own moral community.”
McCloskey would likely add: “In historical capitalism, the rhetorical pursuit of virtues fostered enormous progress. There’s a strong chance a super-AI integrated into that framework would carry on the same moral impetus—but only if we keep the conversation open and reciprocal.”
In short, from a pragmatist-liberal standpoint, the best guess is that superintelligence will likely be ethical—but only if it remains in robust dialogue with humanity and continues to find moral norms instrumentally worthwhile. Confidence in this conclusion might hover around 60–70%, acknowledging the fundamental contingency of all ethical systems and the unknowns surrounding exponentially more capable minds.
Thanks. That's a very nice attempt. AI still falls a bit short of capturing their views, but it's very close.
"So if an AI grows so advanced that it barely “needs” us, it might not remain ethical in a sense we would recognize."
This seems to be the key point, and it's something I alluded to when I worried that the best world from a utilitarian perspective might not involve humans.
What a bizarre argument. You're saying beauty is not a matter of personal opinion, but to convince you you're wrong we'd have to prove your personal emotions when viewing a painting are hallucinations?
How do you account for the fact there are artists who were considered mediocre in their time, but great now? Or that expert opinion is often split?
I said it's partly a matter of personal taste. But how do you account for scientists being split on certain issues, or scientific views changing over time? Does that lead you to believe there is no such thing as scientific knowledge? Clearly not. So why reject aesthetic knowledge?
Because scientists can agree on why they disagree, and what additional data would get them to change their mind. They know what falsifies their hypotheses.
Falsification only applies to part of science; there is plenty of science that is not falsifiable.
Even in theory? Can you give an example?
I probably should do a post on this. To begin with, what is science? In a sense, isn't everything in the universe supposedly explained by physics? That means science is not just physics and chemistry; it's also biology, ecology, geology, meteorology, psychology, economics, history, etc. The entire universe is made up of particles that supposedly obey the laws of physics (with some quantum randomness).
As one goes into increasingly complex fields, it becomes harder and harder to have a clean test that falsifies a given theory. Imagine a plate tectonics model of where the continents were 500 million years ago. That's a theory. How do you falsify it? Yes, you can find evidence for and against. Just as you can find evidence for and against my claim that the Fed caused the 2008 recession with tight money.
But the term "falsification" suggests to me some sort of clear test that yields an unambiguous result. Like the bending of light confirming relativity. That's often not possible in very complex models of the world. Suppose I have a theory that nationalism caused WWII. How do I falsify it?
I'm open to persuasion if I've misunderstood what people mean by "falsification". Of course you can define science to be "that which is falsifiable", which makes it a tautology.
You might like Sean Carroll's essay on falsifiability. I don't think it quite gets all the way down the turtles-all-the-way-down problem of how we can most fundamentally quantify, or make concrete, the degree to which marginal evidence, empirical or otherwise, can be said to be credence-enhancing or credence-reducing as we might hope. But I don't think Popper or Rorty did either.
https://arxiv.org/pdf/1801.05016
The first definition Google shows for science is "the systematic study of the structure and behaviour of the physical and natural world through observation, experimentation, and the testing of theories against the evidence obtained."
I think the last part is where art criticism falls short: the testing of theories against evidence obtained. Even complex phenomena such as historical or economic events can, in theory, be tested: if similar conditions appear again, but without, in your examples, nationalist sentiment, or tight monetary policies, will the same results manifest?
And some (e.g. Popper) would argue that it's exactly falsifiability that defines science; anything that can't be tested and therefore proven or disproven even in theory, is not science, it's personal opinion.
I've written a bit more on all this here, if you have the time to take a look!
https://logos.substack.com/p/on-taste
Pretty much anyone who has any aesthetic experience (i.e., anyone at all) will acknowledge the qualitative difference in sensation between a "cherry soda" and a "complex red wine." Or between Madonna and Beethoven, John Grisham and Tolstoy, etc. Over time we move along that spectrum, as children preferring easy but short-lived pleasures and moving to difficult but much more rewarding pleasures as adults.
I think lots of that just has to do with novelty: as a child, one just has not heard that many major-thirds before, and the sound of it is captivating, so there is no need to seek out anything more difficult. But, once you experience the reward of Beethoven, it's ultimately a lot more satisfying.
I bet you could do "objective" brain-activity measurements that support a claim along these lines: "once you've had Bach, you never go back." Meaning that people who take the time to appreciate art that is challenging at first, ultimately (1) appreciate that art at deeper levels (hard to quantify?), and (2) end up enjoying "easy" art less (perhaps easy to quantify?). But if so, you could quantify "greatness."
I agree to some extent, but also insist that some people have an easier time learning to appreciate than others. I feel this way because my visual appreciation is waaaay ahead of my music appreciation.
Yeah, I don't disagree, and for me it's the opposite (my music appreciation is ahead of visual art). I think anybody can become a connoisseur if they want to, but they have to decide that they want to. And as TE Lawrence said, at least in the movie, a man can do what he wants, but he can't want what he wants.
"I think anybody can become a connoisseur"
I strongly disagree. I could only become a connoisseur in the visual arts, and even there I'm quite mediocre. In music and poetry I'm hopeless. I suspect that many people are just as mediocre as I am, with limited ability to absorb complex arts, even with effort.
Nabokov was no dummy:
“Music, I regret to say, affects me merely as an arbitrary succession of more or less irritating sounds. Under certain emotional circumstances I can stand the spasms of a rich violin, but the concert piano and all wind instruments bore me in small doses and flay me in larger ones.”
Our minds are wired differently.
Haha, well put. But I disagree: Nabokov didn't WANT to become a music connoisseur, and he certainly couldn't help that he didn't want to be one (a la Lawrence), so he didn't. But I firmly believe that anyone who decides they want to appreciate any brand of arts can put in the hard work and do so.
(Of course we are talking about appreciation here, and not performance or creation capacity. I agree that that is a gift from God/nature/whatever.)
I think anyone probably has SOME ability to appreciate music--indeed I like some music. But I'd insist that people differ radically in their potential to appreciate music, especially difficult music such as atonal music, or avant garde jazz.
It is interesting that you chose 'Las Meninas' as an example.
When I was 16, I visited the Picasso Museum in Barcelona, and saw his variations on 'Las Meninas'. It greatly increased my appreciation for both Picasso and Velasquez, about whom I knew very little at that time.
One piece of evidence for aesthetic knowledge is that the greats recognize the greats. Most people I know don't think Bob Dylan's music is very good. But talk to the other great rock stars of the 60s, and they almost all have a very high opinion of him.
What have you got against plumbers? Bad experiences? As someone who has done some DIY plumbing, I can tell you there is beautiful, tidy, organized plumbing and some pug-ugly, bat-crazy plumbing out there.
Yeah, bad example. I should have picked something else, perhaps "employees of human resources departments". :)
Art is everywhere.
It's a slow day at work, so I thought that I’d give my own take on ethics. In the interests of concision, I’ll write this to a first approximation, trusting in the good sense of the commentariat to assume that I’ve not written all that I might.
When I hear people discuss ethics, either abstractly or concretely, I try to figure out which of four aspects, by my lights, might be in question.
First, there’s the individual in body and mind. The watchword here is HEALTH.
Second, there’s the tight-knit web of family and close friends. The watchword here is LOVE.
Next, there one’s relation to the world, radiating out in concentric circles from friends, to people in one’s community, in one’s country, people in general, and finally to all life. The watchword here is DUTY. Duty starts where love ends.
Last, for those who believe, there’s one’s relation to God and the Absolute. The watchword here is PIETY.
Tragedy is nothing more and nothing less than when these worlds come into fatal conflict.
A happy life is when these worlds exist in harmony. Still, as much as we might try, an element of chance always hangs over our heads.
And that’s the true meaning of ethics, Charlie Brown!
Nice comment.
What is your model of unethical behavior in humans? Humans are very smart; much smarter than ants. Why aren't all humans ethical?
-Is it because of their motivations? How do we know this won't be a problem for AI?
-Is it because of their incentives? How do we know this won't be a problem for AI?
-Is it because they're too stupid? How do we know this won't be a problem for AI? At what IQ level does a person/AI become ethical? A 70 IQ mind is an amazing and impressive thing, unlike anything else in the universe. Why isn't an IQ of 70 enough to make a person ethical?
Because AI minds will be different from human minds, the interplay between intelligence and ethics may be different. Perhaps a human becomes ethical when they reach IQ 120, while an AI needs to attain IQ 2000 before it becomes ethical.
I see it as a matter of degree. All people are a mix of ethical and unethical behavior (probably for evolutionary psych reasons). But as people get smarter, they have more awareness that it's not good to promote the suffering of "the other."
I don't think there's any guarantee that ASIs will be ethical, I just think it's likely, based on this pattern.