Thursday, June 22, 2023

Making martinis out of glacier ice

Yesterday I noted that Justice Samuel Alito had written an op-ed in the Wall Street Journal to get in front of an exposé by ProPublica. In that op-ed Alito essentially confessed to corruption. Here are some more responses. Leah McElrath asks:
So do right-wing justices get assigned a billionaire or two after they’re appointed to SCOTUS? Is that how this level of corruption works? Because there seems to be a pattern.
Josh Marshall, while linking to the ProPublica article, said yeah, pretty much.
Alito's excuse here (the private jet seat would have gone unused otherwise) is for the ages. But you start to see the bigger picture which is that Leonard Leo basically pairs each new Justice w a billionaire sponsor family when they arrive.
Ruth Ben-Ghiat replied to Marshall:
Also Leo=Opus Dei which has propped up authoritarians for a century. Franco, Pinochet, Berlusconi (his liaison to Opus Dei, Dell'Utri, was also his liaison to the Mafia), Trump (Barr, Kudlow, Cipollone, and more).
Justin Elliott tweeted:
At least 3 rich businessmen have gotten access to Supreme Court justices by paying for their lux vacations. Justice Scalia got an Alaska vacation paid for by the same businessman who hosted Justice Alito. Here he [Scalia] is making martinis out of glacier ice,
Joan McCarter of Daily Kos reported:
Sen. Dick Durbin, chair of the Judiciary Committee, announced Wednesday that his committee will take up legislation next month to impose ethics reform on the Supreme Court. “The Supreme Court is in an ethical crisis of its own making due to the acceptance of lavish gifts from parties with business before the Court that several Justices have not disclosed,” Durbin tweeted, referring to the latest ProPublica story about Justice Samuel Alito. “The reputation and credibility of the Court are at stake,” he continued. ... Durbin did give a very broad hint to Chief Justice John Roberts; that Roberts could still avoid having Congress dictate how to do his job if he would just “take the lead and bring Supreme Court ethics in line with all other federal judges.” At this point, however, merely telling the justices they have to abide by the code of ethics that all lower courts have to live by isn’t going to be enough. Alito’s response to the story is enough to prove that. He didn’t even wait for the story to be published, but rushed to The Wall Street Journal to give a prebuttal, based on the questions he’d gotten—and refused to answer—from ProPublica.
McCarter discussed various ideas for reforming the Supremes. She then ended with:
In the immediate term, however, the reality of this court has to be dealt with. Even if Roberts relents and imposes a code of ethics, chances are good at least two of the justices will ignore it. Their influence has to be diluted, and the best way to do that right now is by expanding the court. The legislation to do just that exists now. That should be the next move on Durbin’s part, taking up that bill.
An Associated Press article posted on Kos reported:
The House voted Wednesday to censure California Rep. Adam Schiff for comments he made several years ago about investigations into Donald Trump's ties to Russia, rebuking the Democrat and frequent critic of the former president along party lines. Schiff becomes the 25th House lawmaker to be censured. He was defiant ahead of the vote, saying he will wear the formal disapproval as a “badge of honor” and charging his GOP colleagues with doing the former president's bidding. ... House Speaker Kevin McCarthy, R-Calif., read the resolution out loud, as is tradition after a censure. But he only read part of the document before leaving the chamber as Democrats heckled and interrupted him. “Censure all of us,” one Democrat yelled.
That investigation into the nasty guy and his ties to Russia is a long and tangled case. It includes an investigation of the investigators. The article has some of the details.

The news of the last few days has featured several stories of the submersible with five people aboard on their way to see the wreck of the Titanic. On Sunday the reports said contact with the submersible was lost. Crews from the US, Canada, and other countries assisted in the search. Today the news was about a catastrophic implosion that killed all five. I extend my sympathy to the dead and to the surviving families.

The week before, a ship with a few hundred migrants capsized off Greece, killing most of those on board. I extend my sympathy to the dead and to the surviving families. Peter Brookes tweeted a cartoon of the two vessels and a caption over only the submersible saying “All-out international effort to save lives...”

Chitown Kev, in a pundit roundup for Kos, quoted part of an essay by Richard Reeves for the Brookings Institution.
A few years back, I was delighted to see my godson wearing glasses. It makes me feel better to know others are aging too. Judge me if you like. “Don’t feel too bad, Dwight,” I said with faux sympathy. “It happens to all of us in the end.” Dwight laughed. “Oh no,” he said, “these are clear lenses. I just do more business when I’m wearing them.” Dwight sells cars for a living. I was confused. How does wearing unnecessary glasses help him sell more cars? “White people especially are just more relaxed around me when I wear them,” he explained. Dwight is six foot five. He is also Black. It turns out that this is a common tactic for defusing white fear of Black masculinity. When I mentioned Dwight’s story in a focus group of Black men, two of them took off their glasses, explaining, “Yeah, me too.” In fact, I have yet to find a Black American who is unaware of it, but very few white people who are. Defense attorneys certainly know about it, often asking their Black clients to put on glasses. They call it the “nerd defense.” One study found that glasses generated a more favorable perception of Black male defendants but made no difference for white defendants.
I accumulated a bunch of tabs about AI over the last couple of months. I finally have time to write about them. Andrew Kadel posted a description of ChatGPT written by his daughter, who has a degree in computer science.
When you enter text into it, you’re asking, “What would a response to this sound like?” If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who’s written things related to your question, it’s not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing! But people keep wanting the “say something that sounds like an answer” machine to be doing something else, and believing it *is* doing something else.
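Her point can be illustrated with a toy next-token generator. The sketch below is something I made up for illustration, not how ChatGPT actually works (real systems use neural networks over subword tokens and vastly more data), but it shows the core idea: the model only learns what words tend to follow other words, so its output merely *sounds* like its training data, with no notion of whether anything it says is true.

```python
import random
from collections import defaultdict

# Tiny training corpus (made up for this example).
corpus = (
    "the study found that the model predicts plausible text "
    "the model predicts what a response would sound like"
).split()

# Count bigrams: for each word, the list of words seen right after it.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, n=8, seed=0):
    """Generate text by repeatedly picking a word that has
    followed the current word in training -- plausible-sounding,
    but with no grounding in facts."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word pair in the output was seen in training, so the result reads fluently, which is exactly why people mistake fluency for knowledge.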
John Cole tweeted a cartoon showing a couple watching TV. She says, “I'm not sure what worries me more: artificial intelligence...” The scene pulls back to show that on the TV is Marjorie Taylor Greene. The woman continues, “...or flesh and blood ignorance.”

Pedro Molina tweeted a cartoon of a robot holding a newspaper page with the headline “Freedom Caucus Threats.” The robot says, “You are focusing too much on the dangers of artificial intelligence and not enough on those of natural stupidity.”

Mark Sumner of Kos discussed a Republican ad that came out at the beginning of May and what is so scary about it.
It wasn’t just that the ad used AI imagery. It was that nothing–absolutely nothing–in that ad had anything to do with the real world. Not one of the morbid fantasies in which the GOP indulged themselves was in any way an extrapolation of Biden’s policies. It wasn’t just fake images, it was fake images spawned out of wholly fake claims designed to keep Republican voters properly frightened and enraged.
This is only the tip of the flood that is to come. Some experts can tell which images are fake. But 99.9% of viewers can't, and won't try to, before passing them on.

Sumner created a rebuttal to the ad. He spent about five hours and $10. He could have finished it faster if he hadn't added a note at the end saying it is fake, but shows what the nasty guy has said he would do if he returns to office. I watched the video; it's pretty effective.

Shannon Bond of NPR reported that seven years ago Elon Musk said in an interview that some Tesla models are quite good at autonomous driving. Recently, a man died when his Tesla crashed while using the self-driving mode. The family sued, citing that 2016 interview. The article said:
But the unleashing of powerful generative AI to the public is also raising concerns about another phenomenon: that as the technology becomes more prevalent, it will become easier to claim that anything is fake.
Tesla lawyers pushed back, saying just that – the video is a fake. Judge Evette Pennypacker didn’t buy the claim.
"Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune," she wrote. "In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do. The Court is unwilling to set such a precedent by condoning Tesla's approach here."
However, a jury may demand more verification of whether evidence is real or fake. An AP article posted on Kos discussed AI on the campaign trail:
The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.
Some of the tactics mentioned: a call from a faked famous voice urging you to vote for a particular candidate; a candidate manipulating a TV journalist's reaction; the video full of fake images mentioned above; a doctored video of the opponent attacking his base; fake images of children in libraries learning about satanism; fake images of the nasty guy being arrested. In another article Sumner wrote:
On Monday, a pair of AI-generated images appeared on social media platforms Twitter and Telegram. One of these showed what was reportedly a large explosion at the Pentagon. The second, posted a few minutes later, showed what was reported to be a separate explosion at the White House. Both of these images were swiftly reposted thousands of times on both platforms. ... Within a few minutes, The Street reports the S&P stock index lost more than $500 billion. Most of that value gradually returned over the next few minutes as it became clear the pictures were fake. They had been generated by an AI art program. ... Two fake, easily refuted images made $500 billion vanish. Next time, the images could be more plausible, the distribution more authoritative, and the effect more lasting.
In a third report Sumner included a short statement put out by the Center for AI Safety and signed by 1,100 AI researchers in prominent universities.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Yes, we should pay attention to why these experts are worried.

Sumner discussed a great AI success. Each of our genes, roughly 20,000 to 35,000 of them, codes for a particular protein. What that protein does depends on how it folds. Alas, even when every amino acid in a protein is known, predicting how it folds is tremendously complex. Even supercomputers have a hard time with it. Many complex models could not accurately predict how an unknown protein folds. Then in 2020 the company DeepMind reported their AI program AlphaFold could predict the folding of all proteins. Not just human proteins, but all 200 million proteins in earth's biome. And the program's solutions are a close match for experimental results. This is a tremendous success.

But because of the way an AI learns, how it comes up with a solution is unknowable to humans. An AI could be trained to produce potential new drugs, opening a wondrous new era of medicine. Or an AI could be trained to produce harmful toxins, or misfolded proteins like the prions behind Mad Cow Disease. But since the AI has a tendency to tell us what we want to hear, would it accurately tell us whether what it produced was a wonder drug or a toxin?
It doesn’t take a superintelligent general-purpose AI equipped with Skynet and an army of Terminators to pose a tremendous threat. The threat is there in a toolset whose value is so great that we can’t help but use it, and whose errors are so unpredictable that we can’t understand their source.
