Ridiculous AI
The Computer Corner

December 21, 2025

by Charles Miller

Over breakfast recently, two friends and I were comparing experiences using AI chatbots. The first friend related how he had asked the highly regarded AI chatbot Grok for design assistance with some basic geometry calculations. Grok's answer skipped over "basic" and jumped into some complex spherical trigonometry. Drawing on his engineering background, my friend recognized that Grok's answer was unnecessarily complex and probably not correct. When he pointed this out, the chatbot answered "You're right," even praised him for his common sense, and then provided a different answer.

Coincidentally, that same week I had had a similar experience with ChatGPT. I was working with a database in which some of the data was time entries expressed as a number of seconds. Rather than listing a time value as "763" seconds, I wanted it to display as "00:12:43" (zero hours, twelve minutes, and forty-three seconds). This is something I had done years ago but had forgotten how, so I asked the chatbot for the Microsoft Excel spreadsheet formula to do this formatting. ChatGPT responded with instructions involving four complex formulas, apparently one each for hours, minutes, and seconds, with a fourth formula to consolidate the results of the other three. I will not print them here because they would take half a page. I wrote back to ChatGPT saying, "That's ridiculous. This can be done with one short simple formula." ChatGPT answered "You're right" and then said the formula is =A1/86400, with the cell formatted as [h]:mm:ss.
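For readers who prefer to see the arithmetic spelled out, here is a minimal sketch in Python of the same seconds-to-time conversion (the function name is my own invention, not something from the chatbot exchange). The division by 86,400 in the Excel formula works because Excel stores times as fractions of a day, and a day has 86,400 seconds:

```python
def seconds_to_hms(total_seconds: int) -> str:
    """Convert a raw count of seconds into an HH:MM:SS string."""
    hours, remainder = divmod(total_seconds, 3600)  # 3600 seconds per hour
    minutes, seconds = divmod(remainder, 60)        # 60 seconds per minute
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(seconds_to_hms(763))  # 763 seconds -> "00:12:43"
```

The point is how little code the task actually requires, which is exactly what made the chatbot's half-page of formulas so ridiculous.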

At that breakfast I told my friends of another occasion back in January when I asked ChatGPT if there was an historical precedent for U.S. Presidents skipping the traditional coffee at the White House before going to the Capitol for the inauguration ceremony. ChatGPT answered (paraphrasing) "Yes. In 1953 President-elect Eisenhower slighted President Truman by refusing to enter the White House for coffee; they met later at Blair House for coffee and sandwiches."

As someone who has read at least a couple of biographies of each of those Presidents, I recognized that the answer from ChatGPT could be considered technically accurate, but it was grossly misleading. We all expect such deceitful doublespeak from politicians, but not from chatbots. The answer from ChatGPT omitted the facts that the two presidents did not speak for a decade after that 1953 incident, and that the meeting over coffee and sandwiches did not take place until 1963, when the two ex-presidents had just attended the funeral of President Kennedy.

The large language model used by ChatGPT might have "read" all the same biographies I read, but it does not seem to have much, if any, comprehension of temporal context. Its answer to my question certainly did nothing to convey the deep animosity that existed for years between Truman and Eisenhower. Similarly, with regard to the earlier example about formatting in an Excel spreadsheet, I cannot say the ridiculously and unnecessarily convoluted answer ChatGPT first gave me was wrong; I never tried it. I knew there was a much simpler answer, and when I asked ChatGPT a second time, it responded with the simpler one-line solution to the problem.

Likewise, my first friend, whose example I related in the opening paragraph, did not say the first answer he received from Grok was wrong; he just recognized the need to ask the AI for a better answer, and he got one. My second friend made the best comment when he voiced his hope that if architects and bridge builders were relying on ChatGPT to calculate the load-bearing capacity of cantilevers, they would not routinely accept the first answer they received.

None of us sitting at the breakfast table that morning was ready to stop using ChatGPT. This evolving technology represents a significant and potentially useful advance in computer science. We are just not ready to blindly trust everything it says.

**************

Charles Miller is a freelance computer consultant with decades of IT experience and a Texan with a lifetime love for Mexico. The opinions expressed are his own. He may be contacted at 415-101-8528 or email FAQ8 (at) SMAguru.com.

**************

copyright 2025