| Sep 24, 2025


I get it, people are intoxicated with AI.

As advances in quantum computing have accelerated, Artificial Intelligence (AI) has become more and more viable.

According to the Google AI Overview for the query “quantum computing and AI,” the two are in a “symbiotic relationship” that, “though still developing, holds the potential to revolutionise various fields, including healthcare, finance, and scientific research.”

Before going any further, a confession. I don’t really know what quantum computing and AI are beyond a superficial, nearly meaningless understanding. It’s like I read the headline of an article and am now passing myself off as an expert.

My understanding of AI, and this fits with the applications of AI that everyone seems to be using all the time, the ChatGPTs et al., is that it is based on the use of algorithms with an evolutionary engine. The generations needed for an evolutionary change to take place in a physical organism can be applied in nanoseconds to text, images, videos, or any other information that can be contained in digital form.

I know there is much more to it than that, and that is the part that is changing the world under our noses, and potentially making whole professions irrelevant.

There are a couple of ways this can go. Because our world is ruled by money, and it will cost less to use AI for most things than to pay people to do those things, we will need fewer people, and that means millions or billions of people will be phased out, either through attrition by way of lower birth rates, or through poverty and starvation.

The other, optimistic idea is that we will all become so much more productive that the major problems that have been impossible to address will become easy to address, and things will become better and better for all of us, and we will become happy and productive in ways that we can’t even imagine now.

But in the here and now, as we navigate these new AI tools, people have been asking some interesting questions about how news is reported.

I do not use AI in my writing, and I tell my reporters not to. If and when something AI-generated is submitted for publication, I believe I can tell, but I may be wrong.

I have at least three concerns about AI-generated content. The first: I assume that I could train AI tools to duplicate my writing style, with access to all my articles online as source material. The problem is that any growth or shift in my style that may come from the writing process I undertake from week to week would be halted as soon as I started using AI to replace myself. Would it make a difference if I told ChatGPT to use all my online articles as source material to replicate my writing style, or if I told it to stop at 2015, or 2010?

My second, more relevant issue is that AI never leaves the digital world. The most important aspect of reporting is gathering information from the real world, from sources other than what has already been written and stored on the Internet. That is what takes time, and that is the value of what is contained in a news source like a newspaper. The things that happen in the world are of interest to people living in the communities where those things happen. AI cannot go there. It might be able to predict, to pretend, or to assume, but the world has a habit of doing its own quirky thing.

The third issue is credibility. Particularly now, in a social media world, the basic rules governing news media are more important than ever, at least I think they are. What is written in this newspaper is based on something that was said by someone, and that person can respond. They can say they were misquoted, that their words were misconstrued or taken out of context. What is written is subject to verification.

Further, there is a name at the top of the articles written here, and that person is responsible for what is written. That is an important guarantee for the reader.

Because my name is at the top of this page, readers know who wrote it. They can do whatever they want with that information, but when things go wrong, there is someone to blame, someone to hold to account.

If I conduct an interview and record it, and then supply instructions to an AI tool to organise the interview in a certain way, to create a lede, and to construct a story, I expect it will do that for me, in an instant. I can then tweak it, fix it, put my stamp on it. All of this might save some time. But when I put my name at the top of the article, I don’t know that it is true that I wrote the article. I handed a bunch of decisions over to AI, and it may not have made the same decisions I would have. I would not be the writer, only the editor. If I use AI tools to edit, it is different, but even there I must be careful.

Using AI without understanding its implications would compromise whatever legitimacy a newspaper has as a news source.

The real life, the juice, of any storytelling comes from the surprising, quirky, unpredictable things that people say and do. AI might be able to do all that by now, but call me old-fashioned. I think that when something actually happens, when someone actually says something, it is more real than when it just seems like it happened.

This is the case even if the thing that can be made to seem to have happened is done so well that it appears as real as, or more real than, reality itself.

(The above was not written with the help of AI tools, or so I would like you to believe.)

Support local independent journalism by becoming a patron of the Frontenac News.