By Tony Farkas
I’ve been a patron of the internet since its virtual infancy as an information transfer medium. I’ve even been a computer nerd since the days of the Commodore PET.
I’ve done Fortran and BASIC coding on TRS-80s, messed with Windows and its attendant apps from version 3 on up, and even built my own computers, starting with 486 machines on through modern-day Pentiums.
When the internet became a big thing, particularly as a purveyor of news articles, I became my newspaper’s first webmaster and learned the ins and outs of the medium, even how to write HTML.
All of this is to say that I’ve seen the trends in the computer age, from personal computers being little more than large solitaire machines to the world at your fingertips with smartphones.
There is a trend the industry is galloping madly toward that anyone who values truth and trust should be very afraid of, and that is artificial intelligence.
Current estimates show that about half of the content generated on the internet is AI-created, and in the not-too-distant future, that figure is expected to exceed 90 percent.
In the beginning, as an editor, it became apparent that the information found on the web was suspect. We quickly came up with rules and expectations that the internet could not be the source of stories, and any story that would, say, cite anything from Wikipedia was immediately tossed back at the writer, since Wikipedia is user-edited and extremely questionable.
As AI grows and matures, it could take information, such as news stories from a media outlet, rewrite them and post them on a competing site, effectively creating competition that is managed by software. News outlets would not be the only segment of the internet affected by this, either, since developments may begin affecting photographs, sales pitches, even website and software creation (a la Mr. Smith in “The Matrix”).
If additional, incorrect facts are introduced into an article, who bears the blame? The original creator? The owner of the new AI-driven site? Who can you sue for libel if a piece of software prints a fabrication?
There’s the idea that AI can be used in teaching and homework applications. How do you grade a book report written by ChatGPT, even if it can be spotted? If teachers use it to share resources, as some people suggest, who’s to say the AI doesn’t introduce erroneous values into the search?
(As an aside, an article in the New York Times claims that the benefits of AI-assisted homework outweigh the risks.)
Moreover, AI could have military applications, becoming the driving force behind defense systems like ballistic missiles, and anyone who has seen cautionary tales such as “Colossus: The Forbin Project” or “The Terminator” will feel that little chill of fear running up their spine.
Knowledge is available from many sources, but it requires effort to gain wisdom. To take away the search in favor of convenience will cheapen any advances that can be made, and failure will be met with shrugs.