A few minutes ago I looked up at the muted TV screen and saw the clip from the tech-snooper film Sneakers in which Robert Redford shuffles Scrabble tiles, attempting to decode a password.
I enjoy doing anagrams, but I was pressed for time. So I asked ChatGPT to create an anagram from my full name. I'd say AI has a way to go in developing its brain. It gave me one that I then had to rearrange for it to make any sense. My finished product: "Charm in trip plan."
I appreciate that even that's a rough, subliterate way to say "There's charm in trip planning." But it's close enough. (When I asked it to give me an anagram of the anagram, it provided not my name (nothing unusual about its spelling), but this: "Charming in part." Bingo! AI nailed it in that snarky, two-word addition.)
It also cheated; in the first exercise it reduced the number of original letters. Kinda changing the rules of anagramming, huh, AI?
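For anyone who wants the rule in concrete terms: a strict anagram must use exactly the same letters, in exactly the same counts, ignoring spaces and punctuation. Here is a minimal sketch of that check in Python; the example words are mine, just for illustration, not anything ChatGPT produced.

```python
from collections import Counter

def is_strict_anagram(original: str, candidate: str) -> bool:
    """True only if the candidate uses exactly the same letters as the
    original, same counts, ignoring case, spaces, and punctuation."""
    count = lambda s: Counter(ch for ch in s.lower() if ch.isalpha())
    return count(original) == count(candidate)

print(is_strict_anagram("listen", "silent"))   # True: same letters, same counts
print(is_strict_anagram("listen", "silen"))    # False: a letter has gone missing
```

Dropping even one letter, as ChatGPT did with my name, fails the test.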
The two points I drew from this experience: AI is somewhat less than the electronic genius that's being huckstered to us, and, in its powers of manipulation, the technology packs a lot of peril. To expand on and substantiate the second point more thoughtfully, I consulted Cambridge University's noted computer and cognition scientist Alan Blackwell.
In trying to sort out the very kinds of experiences I've had with AI, he referenced MIT's Professor Rodney Brooks, who assessed ChatGPT's "working principle." And it is this: "It just makes up stuff that sounds good." Blackwell observed that mathematically the assessment is correct. "'Sounds good' is an algorithm to imitate text found on the internet, while 'makes up' is the basic randomness of relying on predictive text rather than logic or facts." (Italics mine.) As noted, it even cheats.
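If "predictive text rather than logic or facts" sounds abstract, here is a deliberately crude toy, again in Python, with an invented word table standing in for the billions of weights a real model learns. It is my sketch of the idea, not anyone's actual architecture; notice that nothing in the loop ever checks whether the output is true, only whether it is statistically plausible.

```python
import random

# A toy next-word table, invented purely for illustration; a real model
# learns vastly larger tables of weights from text scraped off the internet.
NEXT_WORD = {
    "the":    [("answer", 0.5), ("truth", 0.3), ("moon", 0.2)],
    "answer": [("is", 0.9), ("was", 0.1)],
    "truth":  [("is", 1.0)],
    "moon":   [("is", 1.0)],
    "is":     [("obvious", 0.6), ("out", 0.4)],
}

def next_word(word: str) -> str:
    """Pick a plausible-sounding continuation at random; no facts consulted."""
    options = NEXT_WORD.get(word)
    if not options:
        return "there"  # fall back to something that still "sounds good"
    words, weights = zip(*options)
    return random.choices(words, weights=weights)[0]

# "Sounds good" because the weights imitate existing text;
# "makes up" because every step is a weighted coin flip.
word, sentence = "the", ["the"]
for _ in range(3):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the truth is obvious"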
Blackwell also observed that the "Godfather of AI," Geoff Hinton, had recently speechified that one enormous risk of the technology is not that it will become eerily "super-intelligent," but that it "will generate text that is super-persuasive without being intelligent." Just like Donald Trump or Boris Johnson, Blackwell wrote, whose world is evidence- and logic-free.
So, asked Blackwell in 2023, if this gadgetry isn't about logic, what is its "scientific principle"? For an answer to that he looked not to mathematics or computer science but to linguistic philosophy, as refined by the late Prof. Harry Frankfurt in his book On Bullshit. The bullshitter "does not reject the authority of truth, as the liar does," wrote the philosopher. "He pays no attention to it at all."
As a case study, Blackwell recalled "the astonishing behaviour" of Britain's highest-ranking leaders during the Covid pandemic. He reflected that Brits "wonder[ed] how we came to elect such bullshitters to lead us." There was no cause for wonder, though. He again turned to Frankfurt: "Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about."
In his speech, Hinton the Godfather tossed out an even more ecumenical jeremiad about AI systems that disregard what Trump and Johnson disregard: evidence and logic. These systems "could become our overlords by becoming superhumanly persuasive, imitating and supplanting the worst kinds of political leader." They'd reach down into every minute of our day.
I didn't mean to write at this length on the topic. I began only with curiosity about what ChatGPT could anagrammatically make of my name. (You may be curious about what it made of "Donald John Trump." This: "Damn John, Plutord." Now that I like — the seeming ominousness of "Plutord." I'll interpret it as one who's related to that dwarfish little freak of a planet, Pluto, which some time ago got kicked out of our solar system's polite society.)
I began cheerfully with a dram of whimsy and wound up in a dreadfully dark place of tech absurdities. But that's often the contradictory joy of writing more than a Twitter post.