Gina blinked her eyes hard before staring at the screen again. The headline remained the same: 673 Dead in Catastrophic Double Plane Crash. But was it real?

Since starting her new job on the “Generative AI Hallucination Detection” team, she couldn’t look at anything in the news or on social media without questioning its origins. Ironically, her co-workers and friends called her the “hallucinator,” even though her job was to detect and dispel them. She had a knack for identifying Gen-AI hallucinations, especially the really damaging ones. However, sometimes she felt like she was hallucinating herself. It got tough to manage the ever-blurring lines between the real, the factual, the experienced, and the just plain made up.

Even though apps and programs abounded to detect hallucinations, deepfakes, and other forms of generated deceit, sometimes the work required a human. Other times it came down to hardcore research, perusing non-digital formats in archives. The thing about generative AI is that it creates based on what it is “fed.” Sometimes the AI generates new, often fictitious, results, labeled “hallucinations.” Once created, hallucinations can also become part of the generative AI “diet” and be regurgitated later, with previous references to back them up. It can be a self-perpetuating cycle, hard to detect and even harder to extinguish.

She focused on the headline before scrolling quickly through the article, jotting down key information points. Airline names. Locations. Plane models. Number of passengers. Afterward, she zoomed in on the photos, looking for telltale signs of fabrication. In the 90 seconds it took her to do this, her feed had been pinging madly with frantic work messages about the dramatic headline. In the 120 seconds it took her to listen to those messages, the headline had been viewed over 100,000 times and counting.

Gina cringed hearing her boss’s orders barked through the feed. Thanks to some recent upgrades, the sender’s real voice read out text-based messages, another feat of AI magic. In essence, her boss needed answers, and fast. The challenge for Gina and the others on her team was that the facts needed to dispel hallucinations often came from the same sources that produced them. Unless she could verify details, like the ones she had noted, through alternative channels, the story could be tricky to disprove.

She started where she always did, by reaching out to human experts. This time, she started with her airline contacts.

The Magic Language of Babies

It may seem that the only language babies know is crying, crying, and more crying. But in many cases, crying is a last resort: a way of letting caretakers know the baby is now desperate for food, a clean diaper, sleep, or a snuggle. Rivaling the decibel level of a lawn mower, crying is a powerful survival mechanism for the baby. The loudness ensures somebody hears it.

Although crying seems like the main language, babies are really masters of non-verbal communication. Noticing the nuanced signs and signals, and figuring out what they mean, requires paying close attention. Take the widely recognized newborn hunger cues: general rustling and mouth opening to start, and sometimes “rooting,” when the infant goes searching for a nipple. The movements gradually get more pronounced, going through two levels of escalation before the crying, wailing really, starts.

I thought about these things while reading NeuroTribes: The Legacy of Autism and the Future of Neurodiversity by Steve Silberman. In the book there’s speculation that many of the technological advances we use, including the many forms of communication, were created by people on the spectrum. As I understand it, this may be because many people with autism prefer communicating in less direct ways.

While I appreciate and support the many wonderful ways technology connects us, we lose something vital without in-person time. Small details, such as how somebody smells, telltale signs of nervousness or boredom, and where somebody’s eyes are focused, don’t exist in many forms of electronic communication. In verbal conversations we’re left to rely on tone; in textual conversations we have even less. In some cases we may only have cues such as capital letters, sparingly used punctuation marks, and emoji to help us figure out the non-verbal nuances. I also feel that too much electronic communication makes people insensitive towards each other.

Even on-camera conversations miss cues. Sometimes the connection is poor resulting in grainy images or lag times. Other times people are not looking directly at the camera because their attention is on another monitor.

We have a lot to learn from each other by connecting. Even if verbal communication doesn’t feel comfortable for some of us, there are lots of other cues to notice for connecting.

Cruising Through Travel with Biometrics

I recently read an article about some cruise lines and airports relying on facial recognition to check in and track passengers. The idea behind this is simple: we all have unique biometric data, such as retina scans, fingerprints, or facial scans. At airports, the idea is to use facial recognition to verify passengers without the need to physically check a passport or other identification. Cruise ships are using facial recognition to track passengers as they disembark and come back aboard after excursions. One cruise ship I read about was using facial recognition software to match people to photos taken of them. It could even blur other people in the background, depending on which level of permissions they had agreed to.

As usual, I find myself both fascinated and creeped out. I can definitely see the benefits and the convenience. I always harbor a secret fear of being photographed by accident, or of appearing in the background of somebody else’s photo, and of that photo getting posted on social media or shared in places I wouldn’t willingly agree to. Having the option to have my image automatically blurred in the background, or anonymized in some way, seems great. I would love to have stronger digital rights over the photos I take and share, and equally over other people’s photos with me in them.

However, given the high error rate of facial recognition with certain groups of people, I remain skeptical about the accuracy. Also, if you’re like me with a generic-looking face, there could be cases of mistaken identity. I’ve lost count of the number of times people approach me in public to ask me my name, thinking I’m somebody they know. Yet, when I tell them no, they often get strangely passionate about it. At times, I’ve had people persist and ask if I’m related to the person they thought I was.

I’m also dubious about the destruction of this collected data. Though the cruise lines and airports may say one thing, I suspect they might keep our biometric data longer than we would like.

It seems inevitable that these services will come, probably sooner than I would like. The first time I coast through the airport without having to pull out my passport once, it will be fascinating and creepy. All things to think about during my flight.

Acknowledging Artificial Intelligence (AI)

ChatGPT has been around for almost a year. It’s hard to fathom the magnitude of the impact it has had, along with other similar Artificial Intelligence (AI) products. And yet, at the same time, some things that I wish would be impacted haven’t changed at all. AI is moving rapidly, but as usual, legislation, guidelines, and policies are slow to follow.

I’ve been keeping an eye on the writers’ strike in Hollywood for months. The inclusion of AI in the contract felt like a win and a big impact, at least from my perspective. Essentially, writers wanted protections when it came to AI, among other requests. Understandably, writers felt concerned that AI could replace them, partially or entirely. For example, AI software could write most of a script, and then it would take fewer writers to handle the editing and customizing. I don’t know the exact details of the negotiated deal, but it seems to me that using AI with content creators needs to be handled delicately. There’s a lot of nuance to consider.

I understand why writers would fear AI reducing, or replacing, them. With a few carefully guided prompts and enough training, some chatbots could probably churn out something decent to work with. Then again, some writers might appreciate using a chatbot to help generate ideas, or a partial script to edit and modify. Coming up with content, especially under high-pressure deadlines, can be draining and stressful. Some weeks I’m so busy, or something unexpected happens (like getting covid), that it’s tempting to use ChatGPT to write something for me. And I only write 400 words (or less) a week! I haven’t used it yet, but that day could come…

Regardless of how writers use AI, the awareness around it and pressure to include it in the contract was important. Too often, new technologies arrive without a lot of governance or guidance on how to manage them. When I was in school, social media was just arriving as something new. We used to ask questions about the impact of all this unmonitored information sharing. Often, the answer was to wait and see when a court case might appear about it to establish some guidelines. However, I think it’s better to be proactive, rather than wait for something to happen.

The Myth of Content Lifecycle Management

Last week I attended a luncheon about content lifecycle management. Basically, this is a method for organizations to manage their content, digital assets, data, information, records, etc. in a centralized location. The idea behind it is shared resources and broken-down silos. The end result is more effective sharing, analysis, collaboration, etc.

The image used to showcase this new method was an infinity symbol. Essentially, a never-ending loop that continuously goes around and around. The only problem with this idea is that there’s no way for the lifecycle to end. Lifecycle can mean different things and contain different stages. However, I think most people would agree that at its most basic level a lifecycle contains a beginning (birth or creation) and an ending (death or destruction). The never-ending infinity loop didn’t leave a lot of room for end points. This is one of the most challenging aspects of digital information.

When designing and developing these lifecycle management methods and systems, people seem to forget about that all-important end stage. At a certain point, some of the information or data is no longer valuable. It will only “junk” up the system, and it can slow down performance or skew search results. When content is never deleted from the system, old, outdated content keeps coming up in search results, which can be confusing or annoying.

In my experience, not considering the end stage as an inevitable, and natural, part of the lifecycle at the beginning stages, leads to problems later on. For example, sometimes it can be difficult to find and label content retroactively. Determining criteria for which things to keep and which things to purge can also be more difficult when done at a later stage. I’m not sure how companies can accurately label things as “lifecycle” management when some of the most important stages are missing.
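To make the point concrete, here’s a minimal sketch of what building the end stage into the lifecycle from the start could look like: every item gets a retention period when it’s created, so deciding what to keep and what to purge becomes routine instead of a painful retroactive project. The `ContentItem` class, field names, and retention values are all hypothetical illustrations, not any vendor’s actual system.

```python
from datetime import date, timedelta

# Hypothetical content record: the retention period is assigned at creation,
# so the "end stage" of the lifecycle is planned from the beginning.
class ContentItem:
    def __init__(self, title, created, retention_days):
        self.title = title
        self.created = created
        self.retention_days = retention_days

    def is_expired(self, today):
        # The item reaches its end stage once the retention period elapses.
        return today > self.created + timedelta(days=self.retention_days)

def purge(items, today):
    """Split items into those to keep and those ready for deletion."""
    keep = [i for i in items if not i.is_expired(today)]
    expired = [i for i in items if i.is_expired(today)]
    return keep, expired

# Example: one fresh item, one long past its retention period.
items = [
    ContentItem("quarterly-report", date(2023, 9, 1), retention_days=365),
    ContentItem("old-draft", date(2015, 1, 1), retention_days=90),
]
keep, expired = purge(items, today=date(2023, 10, 1))
```

With something like this in place, the infinity loop gains the exit point it was missing: expired items are identified automatically instead of lingering in search results forever.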

Incidentally, I did ask about the model at the luncheon. I wanted to know where the end of the lifecycle occurred. It seemed an appropriate question from The Deletist. However, the answer wasn’t satisfactory. The presenter explained that one could remove content based on the analysis being generated about it, i.e., low-performing content, though this would require some thought about how the removal would actually occur.

Needless to say, I won’t be investing in that vendor’s content “lifecycle” management anytime soon.

The Independent Act of Playing Records

Growing up, my record player was a prized possession. I can still recall how it looked. It came in its own carrying case, held closed by an old-fashioned clasp that could easily be opened by small fingers. The case had a patchy, light blue pattern on it. Inside, the top half contained an image of two kids. They sort of resembled the Chucky doll, but in a benign, early-80’s way. The bottom half sported a small white turntable, a tonearm, large holes for the speaker, and a switch to change between the only two speed options. It looked something like this:

Back in the day, we plugged it in. Even so, it was still lightweight and portable. The box was small, with a white handle on top for carrying it around. To a young kid, this meant independence on many levels. I could move my entertainment system around with me. If the record was small, I could even transport it right inside the player. More importantly, I could play records by myself and for myself. I didn’t have to rely on a parent, or a voice-activated device with established controls, to listen to music or stories.

As a young child I spent hours listening to records. My best friend and I listened to music or stories. Sometimes singing or playing along. Other times adjusting the speed and laughing at how funny our familiar songs and stories sounded moving too fast or too slow.

I often marvel at how much music something as tiny as my smartphone, laptop, or even my now-broken iPod can contain: a lifetime’s collection of records. Yet they can’t be shared or experienced in the same way. As a child, there was something very deliberate about opening up my record player, selecting an album from a small number of options, and setting it on the turntable. The actions were small and automatic, but they set the tone for an activity of listening, or sharing with a friend, without the need for any grownups to help.