The Trouble with Chatbots

The trouble with chatbots is that they are either too dumb to be useful or so deceptively intelligent that their errors are cleverly disguised. Despite my favorable, if limited, experience with ChatGPT, I find that service chatbots usually get everything wrong. Or the results they offer are not useful. Or my problem is too complex for the chatbot to understand. Even basic things get complicated with a chatbot.

When the weather finally seemed like it was staying nice, I contacted my dealer for a tire swap. By default, I ended up with the automated system. A tire swap is a relatively straightforward request, so I thought the chatbot would be more efficient.

When prompted by the chatbot, I said something like “I want to change my winter tires.” The bot replied with something about scheduling an oil change. It likely latched onto the word “change,” even though I never mentioned oil. I tried again with different wording, such as “tire swap.” This time the bot offered me a brake replacement. It was all very baffling.
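
My guess, for what it’s worth, is that the bot was doing something like naive keyword matching. Here is a minimal sketch in Python (purely illustrative; I obviously have no idea how the dealer’s system actually works, and the intents and keywords are my own invention) of how that kind of matching can turn “change my winter tires” into an oil change:

    # Toy keyword-based intent matcher (illustrative only; not the dealer's actual bot).
    INTENTS = {
        "oil change": ["oil", "change"],
        "brake service": ["brake"],
        "tire swap": ["tire swap", "winter tires", "seasonal tires"],
    }

    def match_intent(message):
        # Return the first intent whose keywords appear anywhere in the message.
        text = message.lower()
        for intent, keywords in INTENTS.items():
            if any(keyword in text for keyword in keywords):
                return intent
        return "unknown"

    print(match_intent("I want to change my winter tires"))
    # -> "oil change": the substring "change" matches before "winter tires" is ever checked.

If the real system works anything like this, the order in which services are checked matters more than anything I actually said.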

I started requesting to speak with a person, but the chatbot kept offering to book me the next available appointment, without even confirming the service! Finally, I got through to a real human, who booked me in for an appointment much sooner than the chatbot’s earliest date. It was all very weird.

On the other end of the chatbot spectrum are the powerful AI-driven chatbots such as ChatGPT. These chatbots can handle a range of highly sophisticated tasks, including:

  • research
  • summarizing long complex works
  • imitating styles
  • creating deepfakes
  • performing analysis
  • and more!

And yet, they are prone to making up information, also called “hallucinating.” Sometimes their answers are accurate and factual. Other times they are believable, but made up. Discerning the difference is challenging. As we rely on AI for more and more tasks, the line between reality and chatbot-invented “reality” will become increasingly difficult to see. Soon people won’t know what to believe.

The promise of the chatbot is alluring. For some, it is already proving beneficial. But in many scenarios it is still lacking. The technology is either too sophisticated or not sophisticated enough. Who knows what the near future holds? Service chatbots will likely learn to respond accurately to inquiries. But can we ever trust the results from the more intelligent chatbots?

The End of Originality

I love hearing about the new options made possible by rapidly advancing artificial intelligence (AI) tools such as ChatGPT. Every week I receive emails from a different company trying to sell me products based on generative AI. The potential and opportunities for incorporating this new technology seem limitless.

A few weeks ago I used ChatGPT to help me write part of a job description. I ran several queries and selected the best results to get something close to what I needed. Then I did some customizing. I have to confess, the jump start was pretty nice, especially for something I don’t do regularly. Completing 80% of the work with a few simple queries was efficient.

I enjoyed using ChatGPT to help me out. Afterwards, I spent a bit of time fantasizing about other jobs for this kind of “assistant.” While it is tempting to think about ChatGPT (or an equivalent) one day helping me churn out blog posts or chunks of my resume, I had to wonder: is this the end of originality? If ChatGPT analyzed my whole blog (over 10 years of postings) and started churning out similar-sounding posts, would it still be my work? Would my originality and creativity still be a part of it? Even if I customized the end result, most of it would have been crafted by a machine.

I’ve been following the writers’ strike off and on for a few weeks. I know one of the issues is writers wanting some protection with respect to newly introduced technologies. Many writers, understandably, fear that the technology will replace them.

For example, just as technology handled about 80% of my job description task, the same thing could happen in a writers’ room. The technology, ChatGPT or an equivalent, could churn out a script, and a smaller number of writers could handle the customizing. Additionally, the technology can imitate different writing styles. All it needs is samples of a certain style (e.g., James Cameron, Nora Ephron) to create something similar. In my mind, this eliminates some of the true essence of having humans behind the creation. I don’t think we can ever predict how somebody will interpret something, given the chance. But a machine… it’s always learning. Can it learn to be creative on its own?

Discovering Joni

A few weeks after my father died, I recall finding a stash of CDs he listened to. I also discovered a typed sheet of lyrics to Joni Mitchell’s “Both Sides Now.” Though he wasn’t around to ask, I imagined he typed them out because he wanted to learn them. My true discovery of Joni Mitchell started then. It became a way for me to keep connecting with my father posthumously, and to keep learning about him in his younger years.

Since then I’ve noticed Joni, as I’m fond of calling her, seems to pop up at key moments. For example, this past weekend she gave her first live concert in about twenty years. Coincidentally, my father’s birthday recently passed, and Father’s Day is approaching along with his twentieth deathday. It feels like a sign from Dad to lean on Joni to get me through this month of milestones. Along the way, I may make some new memories crooning along to Joni, or using her music to process the emotions.

I grew up hearing the occasional song by Joni without really understanding the significance of what I was hearing, or even realizing who was singing it. Nor did I know she was Canadian until I moved to Canada almost twenty years ago!

A couple of years ago, in the dark days of the pandemic and lockdowns, I stumbled across Blue, one of Joni’s finest albums. I thoroughly enjoyed listening to the tracks as a way to pass the time during long periods of physical and social isolation. Another joy is introducing the songs to the younger people in my life, who weren’t exposed to Joni growing up. One of them screams to hear “My Old Man” from Blue, and then needs me to interpret the meaning of the lyrics.

Even if Joni hadn’t recently given a concert, I feel she would have appeared in another way. A silent hug from Dad. Always so much to continue learning, both about my father and Joni, the amazing singer-songwriter.

Purging Paper

I’m always amazed at how many piles of paper I seem to have lying around. Admittedly, some of them are historic: papers created during an earlier time, when all (or most) business happened on paper. Sometimes I have papers because I forgot to select the option for electronic delivery. Now I can either scan the older documents or manage them physically until I can purge them.

For example, when I set up my new electricity provider in 2020, I never selected the electronic-statements-only option. I’m still puzzled why electronic statements weren’t the default. Producing and mailing paper statements comes with a cost that could easily be avoided. Even switching to electronic statements was not as easy as I would have liked. I first had to create an account on the My Account portal, separate from the account I created to set up a pre-authorized payment. All unnecessarily complicated, but that’s a topic for a future blog post.

Consequently, I have paper statements. Normally I would’ve shredded them instantly, except the service provider only keeps electronic copies for two years. Since I need to retain some of them for longer than that, I can either scan them, junking up my computer and spending time in the process, or spend the time organizing them physically.

My other challenge is having everything set up so I only touch each paper once. Otherwise I end up re-sorting and reshuffling all the papers, moving them from one place to another. It’s all very inefficient. I start by setting up bags or boxes for SHRED, RECYCLING, and TRASH. I also like to have some file folders, pens, and labels ready to go.

Going through the piles and making a determination is fairly fast. Honestly, by the time I work up the motivation for these types of tasks, some documents are too old to be valuable, which makes some of the work easier. The challenging part, however, is figuring out where to store the papers I’m going to keep, or whether I need to scan them, which can be time-consuming. Sometimes when I store physical papers (e.g., tax receipts from when I was a small business owner), I include a destruction date right on the folder or envelope. This makes it easier to purge in the future, but still requires effort in the setup.

Ownership Issues with Chatbot Technology

Recently, Meta (formerly Facebook) decided to release its chatbot code as open source. This means that software developers anywhere can take the code and use it for their own purposes. For some developers, this will be hugely beneficial. The code needed to power something of ChatGPT quality typically requires enormous resources to develop, something smaller companies or individuals couldn’t produce on their own.

This decision has both benefits and downsides. On the plus side, enabling all kinds of developers to use, play with, and experiment with the code for free can enhance innovation. New discoveries can be made faster, and sometimes more efficiently, than if they were being made by just one company. The directions in which the code can be explored are infinite and unrestricted. If only one company, or a handful of them, developed the technology, there would be fewer options, and as consumers we would have to accept whatever those few companies deemed the “appropriate” uses.

However, this unrestricted freedom can also be one of the biggest downsides. While offering the code as open source allows for independent and innovative development, we can’t always predict in which direction the code will evolve. For example, somebody could use it to create an underground app for deepfakes, or to disseminate misinformation and disinformation broadly. Others may use it to advance medical techniques, uncover gender bias in job descriptions, or help people craft resumes. The point is, without any regulation, oversight, or accountability, we can’t know the end result, nor can we anticipate how far the technology will go or how fast. Once the code is out there, it’s probably infeasible to rein it back in.

It’s hard to know which path is the right one. Innovation and discovery are important, and they happen more easily when people don’t face restrictions, or when regulations and governance aren’t slowing down progress. Yet, at the same time, this new technology has the potential to be dangerous. With so much freedom, it will be nearly impossible to control, if it isn’t already too late.

The Ethics of Big Data

Whenever I hear about people using big data to make decisions, I always wonder about the sources. I want to understand more about the data being used and how it was gathered. More importantly, who supplied the data? Equally important is having insight into who designed the algorithms analyzing all the data. This matters because each of these points, and several others, can introduce bias. And in most cases, they probably do.

For example, consider what we understand to be the most common symptoms of a heart attack. My first guesses would be symptoms such as pain in the left arm and feeling faint, dizzy, or sweaty. These symptoms form the basis of assessment and triage protocols. However, they’re also based on symptoms typically documented for men, not women. From what I understand, women generally don’t experience the tell-tale pain in the left arm. When data about heart attack patients is used to support decision making, shouldn’t we be considering the bias built into it? How does the data account for differences between men and women? How do these differences translate into decisions and protocols?
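
To make the point concrete, here is a deliberately oversimplified sketch in Python. The symptom lists and “checklist” are entirely made up for illustration; this is not real clinical data or any actual triage protocol. It just shows how a rule built from one group’s typical symptoms can quietly miss another group’s presentation:

    # Entirely invented example; not real clinical data or an actual triage protocol.
    CLASSIC_SYMPTOMS = {"left arm pain", "chest pressure", "sweating"}

    def flag_for_cardiac_workup(reported_symptoms):
        # Flag a patient only if they report at least one "classic" symptom.
        return bool(CLASSIC_SYMPTOMS & set(reported_symptoms))

    patients = [
        {"id": "A", "symptoms": ["left arm pain", "sweating"]},      # textbook presentation
        {"id": "B", "symptoms": ["nausea", "jaw pain", "fatigue"]},  # presentation more often reported by women
    ]

    for patient in patients:
        print(patient["id"], flag_for_cardiac_workup(patient["symptoms"]))
    # Patient B is never flagged: the bias sits in the checklist the rule was
    # built from, not in the code that applies it.

The code itself is perfectly neutral. The problem is the data and assumptions baked into the checklist, which is exactly why the sources matter.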

Another important aspect is the ownership of the data. I always feel leery about letting websites or apps track my movements. Over the years, the tracking has been steadily improving. For instance, if I shop online and decide not to purchase the things in my cart, I almost always get a reminder email (or several) about it. But shouldn’t whether or not I purchase something online be my private decision?

Somehow, somewhere, someone is aggregating and analyzing this data about my purchasing habits. However, the gathering, the analysis, and the outcome of this process are a mystery to me. At any given time, the data collected about me and my online habits is out of my control. Even though this data about me, and about others, likely increases profits for companies, I’m not seeing any of those benefits.

To me, the collection and ownership of data are overdue for a long discussion about ethics. Is it ethical for companies to collect and use so much data about us? Is it ethical for companies to use data about our online habits as currency to keep us using their products and services? Protecting our personal data will come at a high price, one that has yet to be established.