Sunday, January 8, 2023

Instead of a couple hundred years, consider a couple billion

My friend and debate partner stayed up to watch the 14th and 15th votes for Speaker that I reported on yesterday. After he read my post he had a couple of disagreements with my reporting. The first is that while Rep. Matt Gaetz did prevent McCarthy from getting enough votes on the 14th round, he wasn’t the reason McCarthy won the 15th round. I’ll defer to his eyewitness account. The second is that my friend disagrees with my repeating the idea that the Freedom Caucus will be able to do significant damage when the votes on the debt ceiling and federal budget come due. My friend said that “Kos, in its sad blindness, has it all wrong” and that the calmer Republicans won’t let that happen. I reply that Kos is not the only voice sounding that alarm. Leah McElrath has said the same thing and has quoted several others with the same opinion. Also, from what these voices have said and the documents that have been posted, such as the text of the rules to be voted on tomorrow, the maniacs asked for and believe they have the power to use the debt limit and budget votes to hold Medicare and Social Security hostage. Here’s one of those voices: David Roberts, who runs Volts, a newsletter about clean energy and politics:
And as always, I invite everyone to reflect on what the reaction might be if, say, AOC & 19 of her socialist colleagues insisted on the ability to tank the entire US economy if their demands for, say, universal healthcare are not met. Just let that scenario play out. There's nothing more characteristic of US politics than this: angry white people are indulged. They receive a degree of latitude, an assumption of good faith, & a permanent offer of forgiveness that are not extended to any other political faction.
We’ll see what pundits have to say after the rules vote tomorrow. I’ll add that the maniacs could do a slow-motion gutting of Medicare and SS so it doesn’t affect those who are already of Medicare and SS age, but will significantly affect those who will retire in 10-50 years. With that I’m done discussing McCarthy – at least until after the rules vote tomorrow.

Mark Sumner of Daily Kos discussed a disruption no one is talking about – AIs taking over a great number of American jobs. Sumner told of an energy company that had a floor full of people who figured out how much money was to be paid to which landowner to fulfill the terms of 80K different leasing contracts. Those people were replaced with 100,000 lines of computer code. That was before AIs. Now AIs can construct some pretty accurate images given a basic description (though one needs to allow for a few really weird results). An example of input could be, “Robot riding a Vespa through Rome.” That means companies no longer need to keep a staff to create the images they need. Sumner also demonstrated that an AI can write decent news stories.
The transition from pen to typewriter to word processor may have liberated writers from ink-stained cuffs and the joys of Wite-Out, but the fundamental writing process didn’t change. This is a change to the fundamental process. On the most basic level.
There’s a flood on the way that will sweep aside many jobs, including many high-paying jobs. We need to prepare for this massive disruption. And nobody is talking about it.

At the suggestion of Leah McElrath I read an article by Émile Torres about the philosophy of longtermism. Torres is a PhD candidate in philosophy at Leibniz Universität Hannover in Germany. He used to promote longtermism, but now points out its dangers. Here’s a hint – a lot of very rich people are paying big money to develop the idea.

Longtermism is the idea that humanity needs to invest in things that will preserve its future. To many people that means considering how an action will ripple through the next several generations. But longtermism vastly changes the timeline. Instead of a couple hundred years, consider a couple billion. Under this timeline the goal is to avoid a human extinction event. Also under this timeline individual lives don’t matter. A great number of lives don’t matter as long as humanity doesn’t become extinct. The 40 million who died in WWII are just a ripple. Those who will suffer and die because of climate change (as long as it doesn’t lead to the collapse of civilization) are also just a ripple. The danger is that there could be a shift from nonchalance about mass death to deciding that mass death – even of a billion people – is appropriate and justified if it prevents a collapse and the risk of extinction; that mass death is for the “greater good.”

I add: Seeing the way the extremely wealthy treat those under them, and having heard that some of them want a smaller human population because fewer people would be easier to control, I understand their approval of a philosophy that justifies allowing the death (or killing) of people they don’t like. Of course, they assume they and their descendants will be among the ones who go to the stars.

Torres then gets into why longtermism is dangerous.
He says the idea of longtermism is a threat to its own goals – “the only way to genuinely reduce the probability of extinction or collapse in the future might be to abandon the longtermist ideology entirely.” I didn’t follow some of his reasoning. Then I got to the part where the success of their goal is not measured in the happiness and quality of life of the human population, but only in its number. To achieve the huge number they say is appropriate, they assume the solar system and then the galaxy will be colonized. For that to happen they suggest prioritizing the lives of people in rich countries over those in poor countries. Those in rich countries wouldn’t have to deal with the drag of poor countries as they develop the technological maturity needed to spread humanity among the stars. That means, according to this philosophy, the development of the needed technology is more important than anything else, including biodiversity and quality of life.

But our emphasis on technology is what got us into our current risky situation. To get out of that conundrum we would need wisdom – which to longtermists means using technology to enhance our brains and make us posthuman. Yeah, that looks like a recipe for disaster. Technology is more likely to cause our extinction before it allows us to escape to the stars.

Yes, we should care about the long-term existence of humanity. But longtermism is flawed in its beliefs and will increase and reinforce the risks that threaten us. Is the longtermism philosophy as dangerous as Torres says it is? Rich people are paying big bucks to develop the idea. And anything they approve of has been shown to be bad for everyone who is not them.
