I always have a hard time planning out the Halifax Examiner’s subscription drives.
Sometimes we land on some hook — Top 10 lists, essays from readers, our favourite articles, highlighting reporters — but it doesn’t feel right to recycle those. So a lot of times, we just do a sort of “er, you should subscribe to the Examiner” mishmash, and leave it at that.
This month, we’re going to do something different.
We were talking on the Examiner Slack a couple of weeks ago, and as we do, we were ragging on all things AI. That’s when Iris suggested we use that subject as the focus of our subscription drive.
The more I thought about it, the more I thought that was an excellent idea. The collective Examiner understanding of all things “AI” gets to the heart of why people should support independent news organizations that consist of real reporters doing real work producing real news and real analysis of the world. I’ll circle back around to this point by the end of the month.
But today, I start with a simple observation: people don’t even know what they’re talking about when they use the term “artificial intelligence.” Take, for instance, the very first question from the recent federal survey about Canadians’ views on AI, which read:
How does Canada retain and grow its AI research edge? What are the promising areas that Canada should lean in on, where it can lead the world? (i.e. promising domains for breakthroughs and first-mover advantage; strategic decisions on where to compete, collaborate or defer; balance between fundamental and applied research)
The survey starts with the unwritten assumption that everyone knows what “AI” is. Do they?
The next time someone in your life starts extolling AI and how it’s the future, yada, yada, ask them what they’re talking about. Ask them what artificial intelligence is, what they mean by it, how it works. Nine times out of 10 they’ll either look at you blankly or say something so broad about computers that it’s meaningless.
Here’s how I responded to the survey question:
This question is framed with the assumption that an “AI research edge” is both a meaningful statement and a worthwhile goal in itself. I don’t know how you can start a survey about “AI” without first defining the term. Without defining the term, you are allowing “AI research” to mean anything the government of Canada wants it to mean. It’s a non-starter for critical thinking or meaningful assessment of policy.
So today, I dive into the meaning and history of the term “artificial intelligence.” My hope is that we can at least get to a point where readers of the Halifax Examiner have some shared understanding of what it is we’re talking about, so we can have a meaningful conversation about it.
What is ‘artificial intelligence’?
Computers are useful tools. At their essence, they speed up computations. Of course, as with so many other tools, clever people began using them for warfare.
Specifically, early in World War 2:
In September 1940, with the German air raids over Britain at their peak, M.I.T. mathematician and physicist Norbert Wiener wrote Vannevar Bush, in charge of American war research, and volunteered his services. Over the next few years, Wiener focused on making what he called the anti-aircraft predictor, a computational device designed to improve the accuracy of ground-based gunners by calculating more precisely the location of enemy aircraft at a given point just moments in the future. The success of the project was in part dependent on the type of enemy that Wiener imagined was piloting the airplane.
This was indeed clever, explained Peter Galison, a history of science prof at Harvard:
Wiener’s ambition was to make a black box model of the enemy pilot and then use that to form an anti-airplane system that could characterize the pilot’s movements and learn from past experience in order to predict where the anti-aircraft gun should aim. Some prediction was necessary because it took up to 20 seconds before anti-aircraft fire reached the airplane. So if you shoot at where the plane is now, you’ll surely miss it. If you extrapolate to where it might be if it were to travel in a straight line, you’ll also miss it if the pilot moves from side to side. So it was necessary to be able to predict where the pilot would go, and that’s what Wiener’s machine was designed to do.
What Wiener did that was so unusual was that he characterized the motion of a particular plane using a primitive computer. The radar follows the plane, looks at its motion, makes a statistical characterization of what the pilot’s been doing in the last several tens of seconds, and then uses that knowledge to predict where the pilot will be 20 seconds later. …And then it could be blown out of the sky…
[H]e never got it to work more than a couple of seconds in advance; 20 seconds out or even 10 seconds out was too far. But the scientific administrators cleared to see Wiener’s device found that its predictions for even those few seconds into the future were remarkable; more than remarkable, positively uncanny. The machine appeared to anticipate a person’s intentions, to stand in for human intentionality in some fundamental way. It seemed astonishing to Wiener, astonishing enough to merit the foundation of a new science.
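To make the prediction problem concrete, here’s a toy sketch in code. This is emphatically not Wiener’s actual statistical machinery, and the flight numbers are invented; it just contrasts naive straight-line aiming with a prediction smoothed over the pilot’s recent motion:

```python
# Toy lead-prediction sketch (invented numbers; not Wiener's method).
# track: (seconds, position) samples of a weaving plane, oldest first.
track = [(0, 0.0), (1, 150.0), (2, 240.0), (3, 430.0), (4, 520.0)]

def aim_straight_line(track, lead_time):
    """Naive: assume the plane keeps its very last observed velocity."""
    (t0, x0), (t1, x1) = track[-2], track[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * lead_time

def aim_from_history(track, lead_time):
    """Characterize recent motion statistically (here, simply the average
    velocity over the whole observed window) and extrapolate that."""
    (t0, x0), (t1, x1) = track[0], track[-1]
    avg_velocity = (x1 - x0) / (t1 - t0)
    return x1 + avg_velocity * lead_time

# Where should the gun aim, 20 seconds out?
print(aim_straight_line(track, 20))  # 2320.0, thrown off by the last weave
print(aim_from_history(track, 20))   # 3120.0, smooths the weaving out
```

Wiener’s device did far more sophisticated statistics than this, but the shape of the problem is the same.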
Wiener dubbed his new science “cybernetics,” and through the war Wiener came to believe that cybernetics could one day predict human psychology more broadly. He had entire teams of engineers working under him, and was given significant budgets for research.
After the shooting war and into the Cold War, Wiener owned the field of cybernetics. He would show up uninvited and unannounced at cybernetics conferences to control the conversation, and therefore the science.

Enter computer scientist John McCarthy, then at Dartmouth College. McCarthy was organizing what became the famous 1956 Dartmouth summer workshop on thinking machines, and he faced a problem: how do you advertise such a conference without drawing Wiener in? Writes Amber Case:
Simple: Come up with another term, and use *that* to advertise the conference.
“One reason for inventing the term [AI] was to escape association with cybernetics,” as McCarthy once bluntly explained. “I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him.”
So in a very real sense, AI started out as a term of *distraction*, not clarity. Created to position itself apart from cybernetics, it was coined so broadly that roughly *any* automated computer system can be called artificial intelligence.
And what exactly does the workshop proposal’s stated aim, “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it,” actually mean?
‘Every aspect of learning’? We have some inkling about some features of how some humans learn in some circumstances, but we’re a long, long way from truly understanding learning.
And ‘intelligence’? Forget it, Jack, no one can agree on anything about it.
One thing I think we can agree on, however, is that there is nothing ‘intelligent’ at all in anything called ‘artificial intelligence.’
But there we have it: the term ‘artificial intelligence’ was created as a PR phrase to muddy the waters, to confuse people, so that anything and everything to do with computers can be shoehorned into it. As practiced, ‘artificial intelligence’ means ‘the computer does something cool,’ which is not at all useful.
Computers have always done cool stuff, and still do.
I would like to differentiate two different kinds of ‘cool stuff,’ though.
First, there’s a whole suite of truly useful computer tools that can be broadly described as ‘machine learning.’ Not to get too deep into the science of it, but machine learning programs are basically pattern recognition machines. They’re trained on millions of examples, and over time the program learns to recognize a certain pattern as a particular thing.
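To make that concrete, here’s a minimal sketch of the pattern-recognition idea. Everything in it (the bird measurements, the labels, the nearest-centroid method) is an invented illustration, not how any real app works:

```python
# A toy "pattern recognition machine": it is shown labelled examples
# and learns which measurements go with which label, instead of
# following hand-written rules. All data here is made up, and
# nearest-centroid is about the simplest possible method.

# Hypothetical training examples: (wing span cm, beak length cm) -> species
examples = [
    ((20.0, 1.5), "sparrow"), ((21.0, 1.4), "sparrow"),
    ((35.0, 4.0), "crow"), ((37.0, 4.2), "crow"),
]

def train(examples):
    """'Learn' each species' typical measurements by averaging."""
    sums, counts = {}, {}
    for features, label in examples:
        totals = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            totals[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in totals]
            for label, totals in sums.items()}

def predict(model, features):
    """Label a new item with whichever learned pattern it sits closest to."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: distance(model[label], features))

model = train(examples)
print(predict(model, (19.5, 1.6)))  # prints "sparrow"
```

Real systems learn far subtler patterns from far more data, but the move is the same: learn from labelled examples, then match new inputs against what was learned.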
Machine learning is what drives those apps that identify birds or plants or ticks. It’s what’s behind voice transcription software like Trint, which the Examiner staff uses to help transcribe interviews. It powers grammar and spell-checking programs.
Impressively, it can quickly scan lots of X-rays to find potential cases of cancer.
An overview of the coding needed to read cuneiforms from the paper “CuneiML: A Cuneiform Dataset for Machine Learning.”
One use of machine learning that I find especially cool is the translation of cuneiform writing, the script pressed into the clay tablets of ancient Mesopotamia. Learning to read cuneiform can take decades, so only a few dozen highly specialized people can do it, and hundreds of thousands of cuneiform tablets have never been translated.
This 2023 academic article explains how machine learning is being used to translate cuneiform. Note that the authors are careful about the limitations of the technology, and, importantly, they never use the term ‘artificial intelligence.’
But then Cornell University’s PR department issued a press release describing the research as “artificial intelligence”:
Deciphering some people’s writing can be a major challenge – especially when that writing is cuneiform characters imprinted into 3,000-year-old tablets.
Now, Middle East scholars can use artificial intelligence (AI) to identify and copy over cuneiform characters from photos of tablets, letting them read complicated scripts with ease.
Why would a PR person use a term that wasn’t in the original research? Quite plainly, because ‘AI’ has a societal buzz about it that the writer knew would draw more attention than the more accurate ‘machine learning.’
By 2023, ChatGPT had come out, generating the enormous to-do about AI. But that wasn’t the first AI hype bubble, as Amber Case explained:
The explosive growth of ChatGPT since 2022 has generated such excitement, it’s easy to forget another AI-related product enjoyed similar buzz less than a decade ago.
But it’s true: back in 2014, AI hype orbited around voice-activated personal assistants, with Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana leading the charge. As with other “truthy” technology, Hollywood turbo-charged this excitement, with Jarvis from the Iron Man movies and the Scarlett Johansson-voiced AI depicted in *Her*. “Personal assistant AI is going to change everything!” briefly became the grand pronouncement for that time (almost 10 years from the date of this op-ed).
…
We do continue to use voice-driven assistants, of course, but in narrow contexts where it’s genuinely useful — like map guides while we’re driving. AI can be clever and witty in movies, but in real life we quickly realized it’s an anemic representation of the human soul.
It’s always the same pattern: every so often, new AI applications and play toys emerge from breakthroughs at research universities. These cause bursts of excitement among people at the periphery, particularly marketers and well-paid evangelists, who then create inflated expectations around them. These expectations blow way past what’s technically possible for these systems, but company heads ask engineers to do the impossible anyway. And the AI applications continue doing only what they’re good at: nerdy automation tasks. The public feels let down again.
The voice trick was the first fuel for the broad AI hype, but now it’s been joined by sound, photo, and video generation.
That brings me to the second kind of ‘cool stuff’ that computers do — generative AI. Distinct from machine learning, generative AI uses its training data to predict the next item in a series, and so to simulate a creation of some sort — a text, a conversation, a photo, a video clip.
The predictions are based on what’s happened before. So the giant AI companies (OpenAI, Anthropic, Google, and a handful of others) are basically trying to build databases of all human creations — all written works, all music, all photos, all videos, etc. — which can be used as the basis for a prediction.
This is what the giant data centres being proposed are all about: collecting all information ever created in order to make their models operate better. This is, in a word, theft on a global scale, the biggest heist possible, with all human created work and all its accompanying copyrights being stolen for private profit.
To make up an example, if you type in “Donald Trump is…” the machine will look at its enormous database and see what the most likely next word or phrase is, based on what other people have written in the past. Depending on the context of the rest of the ‘conversation,’ and an analysis of the most likely next thing said by people in that context in the past, the next word or words might be president or old or fat or a Republican or the saviour of America. There’s no thought going into this, no creativity, no insight; it is not intelligent.
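Here’s a toy sketch of that mechanism, assuming a made-up three-sentence corpus. Real large language models use enormous neural networks rather than simple counts, but the core move is the same: pick a likely continuation based on what has followed those words before.

```python
from collections import Counter, defaultdict

# A made-up, three-sentence "database" of past human writing.
corpus = [
    "donald trump is president",
    "donald trump is old",
    "donald trump is president again",
]

# Count, for each word, which words have followed it in the past.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "president" follows "is" twice, "old" once, so the machine says:
print(predict_next("is"))  # prints "president"
```

There’s no understanding anywhere in that loop; it’s counting, at stupendous scale.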
The distinctions between machine learning and generative AI are important, but both have been labelled ‘AI,’ and so it’s difficult to have real conversations about them. The confusion is the point.
One distinction is economic. Machine learning programs can be extremely useful, but they cost money. Real money. Real money that people are willing to spend because the programs can bring important and useful results that make the world a better place.
In contrast, generative AI programs cost a tremendous amount of money, but no one is willing to pay real money for them, or at least not enough to cover their full cost. ChatGPT, which is owned by OpenAI, loses money on every query, as does every other generative AI program. OpenAI, Anthropic, and the rest are losing enormous amounts of money, so enormous that it’s hard to appreciate the scale of it.
This is not the dot com bubble of the 1990s; it’s hundreds of times bigger than that. The big companies are collectively going to lose over a trillion dollars in coming years.
How is this possible? It only works so long as investors continue to pour money into the companies with the hope that one day they will actually turn a profit, and that hope is fuelled by a hype machine that is untethered to reality. Sooner or later, however, the financial reality will prevail; the question is only how much of the broader economy it will take down with it.
Another distinction between machine learning and generative AI is the environmental and social costs. We’ll get way into this in future articles in this series, but the environmental and social costs of generative AI are on the same scale as their monetary costs, and just as unsustainable.
A third distinction is one of intent and expected outcome. When machine learning is used to find cancer on X-rays or translate cuneiform, we have a pretty narrow range of completely reasonable desired outcomes and expectations. But generative AI? If we listen to its promoters, we see a much broader and more cynical agenda, which I can only characterize as evil. Again, the subject of a future article.
So, after entering non-responses to the first couple of meaningless questions, in the end I found the federal government survey about ‘artificial intelligence’ unanswerable.
If the government wants to underwrite research into machine learning, sure, there’s always a case to be made for research that might make life better for people. But that’s not how it is being framed.
Rather, a confusing mishmash of ‘cool stuff computers can do’ is being described as the undefined ‘artificial intelligence’ and then sold as a transformative bridge to a glorious future of increased productivity, ‘innovation’ (whatever that means), and competition (against companies that are losing money).
The confusion and misunderstanding that is purposefully built right into the term ‘artificial intelligence’ is a power grab. If we don’t know what’s being discussed, we can’t oppose it. And if we find some kernels that we might sorta kinda think are good, then we have to support the entire mishmash of nonsense terms.
This is not how democracy works.
I hope that you’ll find this month’s exploration of ‘artificial intelligence’ interesting and helpful. We have some interviews with fascinating people lined up, and I’ve asked the entire Examiner crew to bring their own insights and concerns to the project.
If you support this work, please support the Halifax Examiner with your subscription or donation. Financial support from readers is what makes this operation possible.
Thank you.
(To send or post this item, copy the website address at the top of this page.)
NOTICED
Time change
The clock tower on Citadel Hill in June 2021. Credit: Zane Woodford
As I was researching the Daniel Sampson story and the general social and societal context of the 1930s, I stumbled upon the bizarre history of Daylight Saving Time in the Maritimes; I mentioned it in passing in a footnote in Part 4:
In this era, there was much confusion around Daylight Saving Time (DST). Every city, town, and county passed its own legislation on DST; the various jurisdictions started and stopped DST on different dates, or refused to adopt it at all. One year, the city of Halifax had two different time regimes for a period, as DST applied to the populace generally but not in city offices. In practice, including in the documents cited in this article, no one was quite sure whether they were using Standard Time or Daylight Saving Time. Which is to say, without further scrutiny, any time offered is uncertain.
As I recall, towns in the Annapolis Valley were very much opposed to DST, and refused to adopt it entirely, and one year there was a particularly nasty political dustup over it in Truro.
You could conceivably drive from Halifax to Truro to Pictou and travel through five or six different time regimes, and then a month later travel back through five or six different time regimes. I don’t know how anyone made sense of it at the time. I don’t know when the time change was made a provincial matter.
It wasn’t the point of my research, but I probably should have better documented all the various decisions and time regimes. Perhaps it will be a future project, whenever I happen to find the time for such a distraction.
I was thinking of this when I woke up at 2am and couldn’t fall back asleep for several hours. I know I need that last blast of REM or I’ll be shit when I start writing, so I had to wait around to fall asleep again and then overslept.
Which is to say I’m not a fan of the time change. Pick one time or the other, or something in between, I don’t care, but whatever it is, stay with it. Twice a year I’m off-kilter for a couple of weeks because of this craziness.
(Send this item: right click and copy this link)
THE LATEST FROM THE HALIFAX EXAMINER:
‘Accessibility is the first thing to go’: Committee gives input on snow clearing in Halifax
A plow operator works in Downtown Halifax on Tuesday, Feb. 14, 2023. Credit: Zane Woodford
Suzanne Rent reports:
Members of Halifax regional council’s accessibility advisory committee gave their feedback on issues around winter operations in Halifax Regional Municipality (HRM) as part of a five-year review of service standards.
Committee members expressed concerns about safety, communication, and expectations around snow clearing on municipal streets and sidewalks.
…
As we reported in June, Halifax’s auditor general Andrew Atherton found that the municipality is doing a snow job on winter clearing operations. Atherton’s report found that there was no monitoring of work done by outside contractors, and that improvements need to be made to in-house operations.
Click or tap here to read “‘Accessibility is the first thing to go’: Committee gives input on snow clearing in Halifax.”
(Send this item: right click and copy this link)
Government
City
Monday
No meetings
Tuesday
**Halifax and West Community Council** (Tuesday, 6pm, City Hall and online) — agenda
Province
Monday
No meetings
Tuesday
Community Services (Tuesday, 10am, One Government Place and online) — Supporting Healthy Families; with representatives from the Department of Opportunities and Social Development and Maggie’s Place: A Resource Centre for Families
Human Resources (Tuesday, 1pm, Province House and online) — Capital Plan Updates on School Development and Maintenance Initiatives & Appointments to Agencies, Boards and Commissions; details
On campus
Dalhousie
Biochemistry & Molecular Biology Seminar (Monday, 11am, Theatre A, Tupper Building) — Jena Barter will present “Investigating the Role of the Cholesterol Transporter, STARD3 in Estrogen Receptor-Positive Breast Cancer Cells;” Abby Edison will present “Assessing the role of BAF subunit genetic variants implicated in neurodevelopmental disorders using Drosophila-based functional assays”
Noon Hour Free Live Music Series: Woodwinds (Monday, 11:45am, Strug Concert Hall) — details
King’s
So You Wanna Write a Book (Monday, 7pm, online) — free webinar
NSCAD
Reception (Monday, 5:30pm, Anna Leonowens Gallery) — new exhibitions
Literary Events
Monday
Author reading (Monday, 7pm, R.P. Bell Library, Mount Allison University, Sackville N.B.) — and Q&A with Renée Belliveau, A Sense of Things Beyond
Tuesday
Readings at the Woodside (Tuesday, 7pm, Woodside Tavern, Dartmouth) — details
In the harbour
Halifax
05:30: MSC Kilimanjaro IV, container ship, arrives at Pier 41 from Montréal
06:00: Tropic Hope, container ship, arrives at Pier 42 from Philipsburg, St. Croix
13:00: IT Intrepid, cable layer, sails from Pier 9 for sea
15:30: Zim Atlantic, container ship, arrives at Fairview Cove from Valencia, Spain
20:00: MSC Kilimanjaro IV sails for sea
21:30: One Falcon, container ship (146,287 tonnes), arrives at Pier 41 from New York
Cape Breton
05:00: CSL Kajika, bulker, sails from Aulds Cove quarry for Cape Canaveral, Florida
13:00: Radcliffe R. Latimer, bulker, moves from Mulgrave to Aulds Cove quarry
13:00: John J. Carrick, barge, with Leo A. McArthur, tug, transits through the causeway en route from Montréal to Halifax
21:00: CSL Tacoma, bulker, arrives at Coal Pier (Point Tupper) from Baltimore
Footnotes
Now that sportsball is over we can all get back to hating on Toronto again.