[Image: a bionic hand and a human hand, fingers pointing. Photo by cottonbro studio on Pexels.com]

Although I’m a member of the oldest generation and a bit shaky on computer nuance, you wouldn’t consider me a Luddite – a person who by nature opposes new technology. 

I’m no particular fan of the legendary English weaver Ned Ludd, who in the 18th century supposedly smashed some new-fangled knitting frames and railed against job-threatening innovation.

On the contrary, I depend heavily on my contemporary iMac for researching the periodic ramblings that appear here.

In Ned Ludd’s favor, though, I do admit to scratching my head over the suddenly ubiquitous presence of Artificial Intelligence, wondering what precisely it is and where it might take us.

This requires careful treading, since history shows that many innovations now serving us well were ridiculed as suspicious or useless when they first appeared.

Even coffee, the brew essential to many of us for starting the day, was once suspected of producing “radical thinking.” Long considered by many “the bitter invention of Satan,” coffee was absolved by Pope Clement VIII, who enjoyed it so much that he once exclaimed, “This devil’s drink is so delicious … we should cheat the devil by baptizing it.”

Television in its infancy was derided by an obviously conflicted Darryl Zanuck, co-founder of 20th Century Fox. He’s been quoted as saying that TV “won’t be able to hold any market it captures after the first six months. People will get tired of staring at a plywood box every night.”

Another conflicted executive, William Orton, president of Western Union, once called the emerging telephone an “electrical toy” that has “too many shortcomings to be seriously considered as a means of communication.”

Bicycles also produced their share of worry – in 1894 the New York Times reported that rising mental illness in England “points directly to bicycle riders… there is not the slightest doubt that bicycle riding leads to weakness of mind, general lunacy, and homicidal mania.”  

In 1903, a bank president advised Henry Ford’s lawyer not to invest in the fledgling Ford Motor Company, declaring, “The horse is here to stay but the automobile is only a novelty – a fad.”

Long before Amazon became a household word in the marketplace, Time magazine confidently predicted in 1966 that “Remote shopping, while entirely feasible, will flop.”

In 1995, Robert Metcalfe, the co-inventor of Ethernet and founder of the networking company 3Com, predicted, “the Internet will soon go spectacularly supernova and in 1996 will catastrophically collapse.”

And in 2007, a Bloomberg writer predicted, “The iPhone is nothing more than a luxury bauble that will appeal to a few gadget freaks. In terms of its impact on the industry, the iPhone is less relevant.” 

This brings us back to considering Artificial Intelligence, and how we should balance the good it can obviously do with the mischief it can easily spread.

For sure, we can use some intelligent help in our modern world, which shudders every day with mass shootings, war, racism, cultist leader wannabes, homophobia, book bans, and bitter philosophical divisions.

So, what exactly is this “Artificial Intelligence”?

The definition I’ve found easiest to understand says it’s the science of making machines that can think and act like humans.

Uh-oh.

Gerry Goldstein (gerryg76@verizon.net), a frequent contributor to What's Up, is a retired Providence Journal editor and columnist who has been writing for Rhode Island newspapers and magazines for 60 years.
