Computers Don’t Do Nuance

I’ve noticed that, as a species, we seem to give a lot of credit to computers and technology for their marvelous capabilities, often without realizing that humans are responsible for all of the design, development, and programming.

In reality, computers only do what we tell them to do.  And even more importantly, computers don’t do nuance.  A computer can do almost anything, as long as the task can be translated into the computer’s language.  Essentially, everything must be distilled down to a binary decision: Yes/No, True/False, 0/1, A/B, black/white, etc.

I consider this factor often when doing backend system design for clients.  In order for technology to take over for a human (e.g. by automating a process), all of the decision points must be simplified to computer language.  This can be tricky when a particular process contains too many factors, or “gray areas”, to result in a binary decision point.  For example, consider a college application.  The computer would be able to make decisions based on GPA and test scores, but evaluating the candidate based on other, more nuanced factors, such as extracurricular activities or the essay portion, must be handled by a human.
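To make this concrete, here’s a rough sketch of what that kind of automated triage might look like once everything has been reduced to binary checks.  The function name and thresholds are made up for illustration; no real admissions system is this simple.

```python
# Hypothetical admissions triage: every automated step reduces to a
# yes/no check, and anything nuanced falls through to a human reviewer.

def triage_application(gpa: float, test_score: int) -> str:
    MIN_GPA = 3.0      # illustrative thresholds, not real policy
    MIN_SCORE = 1200

    if gpa >= MIN_GPA and test_score >= MIN_SCORE:
        return "advance"      # both binary checks passed
    if gpa < MIN_GPA and test_score < MIN_SCORE:
        return "reject"       # both binary checks failed
    # Essays, extracurriculars, borderline numbers: no binary rule fits.
    return "human review"

print(triage_application(3.7, 1400))  # advance
print(triage_application(3.4, 1100))  # human review
```

Everything the computer handles is a hard yes or no; the moment a “gray area” appears, the only honest branch is the one that hands the application back to a person.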

Another area where computers lag behind humans is facial recognition.  Overall, humans can identify the same person in photos or video more reliably and accurately than computers can.  Although computers have improved in this area and can accomplish some pretty amazing things in terms of auto-classifying photos, they still have a long way to go.  The other day a friend of mine was showing me some of his childhood pictures.  I’ve known him for less than a year, but I was immediately able to pick him out of his primary school class photos.  I doubt a computer could do that yet.

A lot of advances have been made in AI (Artificial Intelligence) lately that may one day allow computers to think outside of their programming and to better anticipate our needs.  However, based on how poorly the prediction and auto correct work with my text messages, I’d say they still have a long way to go.  So while we’re eager to give computers (and technology) a lot of credit for all the amazing things they can, and will, do, there’s still a place for human brains.

 

Aural Discrimination

During the pre-op for my LASIK surgery, one of my only questions was when I would be getting the Valium.  The technician laughed and told me that nobody acted brave for this surgery.  He then said that if it were a choice between his ears and his eyes, it was a no-brainer.  Eyes for sure.

I paused for a moment.  No music?  No ocean waves or sounds of laughter? Of course I love having both senses, but I would miss sound too much if something happened to my ears.

I’m convinced that ears are among our most neglected organs, right after things like the skin, the gallbladder, and the appendix.  Did you ever notice how rapidly camera technology advanced in mobile devices?  When I got my iPad in 2013, I was amazed at the clarity and quality of the pictures.  The functionality was more limited than a real digital camera’s, but it still took terrific photos.

However, speakers, sound quality, and default noises have all lagged miserably behind the more ocular-centric features.  Each time I upgrade my smartphone, the ringtones that come with the phone get worse.  The first few phones I had, even the “dumb” versions, all offered ringtones that were soothing and soft-toned.  Everything now sounds shrill and tinny.  Of course, that could also be the inferior speakers.

In the same way that mobile devices offer us the capability to take high-quality digital photos, they could also offer us better-sounding ringtones, notifications, and alerts.  By default, most noises feel jarring and disruptive.  I suppose that’s because they’re designed to alert us to every new update of information, no matter how small and inconsequential it might be.  Even the vibrate option is loud and rattling.

Wouldn’t it be nice if the ringtone were something soothing and calming?  I fondly remember my two favorite ringtones: a frog croaking, and waves crashing with seagull noises.  Most people who heard them thought they were funny or weren’t bothered.  I could download these special ringtones onto my new phone, but I feel like the manufacturers could do a better job with the default options.

It would also be nice if the quality of the speakers improved to something comparable to the level of the camera.  Why is this always one of the last things to be considered for improvement?

Blind Faith

As a small child I always dreamed of one day having “hawk” vision: razor sharp up close and able to see details from far away.  From a young age I had “Coke bottle” lenses, the kind so thick they slightly distorted the shape of my head around my eyes.  I switched to contact lenses and wore them for the next 25+ years, continually amazed that one tiny disc of plastic could instantly grant me clear, crisp vision.

After years of thinking about LASIK, I finally worked up the nerve to get an assessment.  As soon as I found out I was eligible, I booked my appointment.  I had grown accustomed to living with corrected vision, where I could maintain a choice about how I wanted to see the world.  Sharp, clear, and in focus.  Or remove my contacts and reduce my world to something soft and hazy with familiar shapes.

Throughout the whole process I remained committed to my decision to have the surgery, coupled with a heavy amount of faith in the process, the technology, the equipment, and the doctor and his staff.  Were the lasers really sophisticated enough to recalibrate for any micro-movements my head might make?  Were the pre-surgery measurements really accurate enough to correct my vision?

I thought about the enormous amount of faith I was putting in myself.  Was this a good idea when my vision could be corrected so well with glasses or contact lenses?  I pushed the thought out of my head as the procedure started.  My eyeballs were anesthetized, one eye open and one closed.  I followed the nurse’s instructions to stare at the red light.  Something was placed over my eye and my vision went dark for a few moments, just like the nurse said it would.  Then it came back, just like she said.  What if… what if… what if…

I again pushed the thoughts out as I moved to the next room for the vision-correcting lasers.  Fortunately, my biggest fear, being blinded, was relieved within minutes of finishing.  The results are nearly instantaneous.  I almost started crying when I sat up from the operating table and could see the room.  I was instructed to keep my eyes closed, but I periodically cracked open my eyelids just enough to confirm I wasn’t blind before shutting them again.

Pretty incredible.

Watching the Election: In RT

Last week I watched the US elections at a bar with a few friends.  The bar supplied me with Dark and Stormies* throughout the long evening; the Dark and Stormy has been my official election night cocktail since 2004.

To pass the time between the various states’ poll closing times, we listened to the typical election banter from the CNN newscasters: vote tallies in states and counties, how current results compared with 2012 results, and so on.  Initially, I enjoyed the real-time, instant reporting.  I appreciated watching the newscasters seamlessly toggle between the 2016 and 2012 results as tallies were updated.  I loved how they could move around the US map and easily zoom in on a particular area to give us details about a specific county.

And then, it just went on and on and on.  Around 10pm, after a couple of hours of coverage, I felt fatigued and bored.  Every few minutes we were inundated with a loud, blaring noise from CNN announcing a NEW Key Race Alert!!  Each time this happened I felt a surge of stress and adrenaline.  I quickly grew tired of the discussions about who had votes where, how many each candidate needed to win, and the comparisons with the 2012 election.

Every discussion was the same, except with the state and/or county names changed.  I wanted to hear something new or different, instead of the same formulaic points made over and over and over….  Since everything was reported in RT**, we got “updates” from states with only a small percentage of votes counted.  I would’ve preferred to wait until a state had counted at least 50% of its votes before getting an update.  I felt overloaded and burdened with too much information.

At some point during the night I started to think about how we used to watch elections before they were broadcast in RT, with incessant updates every time a vote was counted.  What did we do during those hours between the polls closing and the results coming in?

As a side note, with all the new technological advances and RT reporting, I couldn’t help but wonder why it took New Hampshire so long to count its votes.  One would think that such a small state would be able to produce results faster, even if it had to count paper ballots by hand.

 

*ginger beer and dark rum

**Real Time

Correcting Auto Correct

I typically use Swype writing to create messages on my smartphone.  Swype writing [link here] allows me to enter words by moving my finger continuously across the keyboard over the appropriate letters.  For example, to type “Monday”, I start with my finger on the “M”, slide up to the “o”, down to the “n”, over to the “d” and “a”, and finish on the “y”.  The word is created without lifting my finger to input each letter individually.
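Under the hood, the keyboard can’t literally read the word off my finger; it has to guess.  Here’s a toy version of one plausible heuristic (my own simplification, not Swype’s actual algorithm): the trace must start and end on the word’s first and last letters, and the word’s remaining letters must appear, in order, somewhere along the path.

```python
# Toy gesture-typing matcher -- a simplification, not Swype's real algorithm.
# The trace is the sequence of keys the finger passed over.

def could_match(trace: str, word: str) -> bool:
    if not trace or not word:
        return False
    # The trace starts and ends on the word's first and last letters.
    if trace[0] != word[0] or trace[-1] != word[-1]:
        return False
    # Every letter of the word appears, in order, along the trace.
    keys = iter(trace)
    return all(letter in keys for letter in word)

# Sliding from "m" to "y" for "monday" also brushes past neighboring keys.
trace = "mkonbgdsay"
candidates = ["monday", "may", "many", "moray"]
print([w for w in candidates if could_match(trace, w)])  # ['monday', 'may']
```

Notice that “may” survives too; a real keyboard has to rank the survivors, typically by how common each word is and how closely the path geometry fits.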

Often, using predictive text, the smartphone figures out what the word is and suggests it before I’ve finished “typing” it.  After each completed word, the predictive text feature also suggests words to follow based on my usage patterns.  For example, if I type “What”, I am automatically offered options for the next word such as “you, time, is, are, about, time are”.  Sometimes I can complete the whole message by selecting the offered options.  Most of the time this works well.
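The “based on my usage patterns” part is, at its simplest, just counting.  Here’s a minimal sketch of bigram-style next-word prediction; it’s my guess at the general approach, not my phone’s actual model.

```python
# Minimal bigram predictor: count which word follows which in past
# messages, then suggest the most frequent followers.
from collections import Counter, defaultdict

history = "what time is it what time are you free what is that".split()

followers = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    followers[current][nxt] += 1

def suggest(word: str, k: int = 3) -> list[str]:
    return [w for w, _ in followers[word.lower()].most_common(k)]

print(suggest("What"))  # ['time', 'is'] -- echoes the "What" example above
```

A real keyboard blends these personal counts with a large general-language model, which is presumably where things start to go wrong.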

Sometimes, however, the smartphone tries to be too helpful.  I turned off the auto-replace feature on my smartphone because it kept inserting email addresses into my messages based on a few letters, among other bizarre behaviors.  It was so annoying!  At times, the prediction and auto-replace are so wrong, and so clearly not based on my language patterns, that I can’t help but think some developer had a good laugh while doing the programming.  Here are some examples.

“And”, a common and frequently used word, was almost always auto-replaced with “abs”.  After turning off auto-replace, I’m now offered “abdominals” as an option when I enter “and”.  I’m not a fitness trainer.  It’s rare for me to use “abs” or “abdominals” in a sentence.  When would these words ever be more likely, or more common, than “and”?

About 85% of the time, “also” was automatically replaced with “Akzo”.  Huh?  What is Akzo?  Also puzzling is why “also”, another common term, would be replaced with an obscure proper noun.  It just doesn’t make sense.

Another strange correction is “with” frequently turning into “wuthering”.  I was a literature major in university, and I’m 100% sure I’ve never used this word in a sentence unless I was referring to the novel Wuthering Heights.

I understand that some training and customization is necessary, but some options are so far off that I wonder how they ever got programmed in.  It has made proofreading essential, even for trivial messages.
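What makes these examples so baffling is exactly the frequency question raised above: any correction scheme that weighted candidates by how common they are should never prefer “abs” over “and”.  A toy ranking makes the point; the frequencies below are invented for illustration, and this is certainly not how my phone actually scores its candidates.

```python
# Toy autocorrect ranking: among candidates equally "close" to what was
# typed, prefer the more common word.  Frequencies are invented for
# illustration; a real model would use corpus counts plus user history.

FREQ = {"and": 28000, "abs": 4, "also": 1900, "akzo": 1}  # hypothetical

def rank(candidates: list[str]) -> list[str]:
    # Assume all candidates are equally close; break the tie by frequency.
    return sorted(candidates, key=lambda w: FREQ.get(w, 0), reverse=True)

print(rank(["abs", "and"]))    # ['and', 'abs'] -- "and" should always win
print(rank(["akzo", "also"]))  # ['also', 'akzo']
```

By that logic, “abs” and “Akzo” should never surface at all, which is why the replacements feel less like statistics and more like a prank.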

 

In the Mayor’s Chambers

“Honey, what should we print for dinner?”

Snapper opened the cupboard and perused the options while waiting for a reply.  A row of jars filled with muted-colored powders and pellets lined the shelves.  He picked up a jar containing a moss-colored, chalky substance simply labeled “greens”.

Snapper’s brain hurt from analyzing emoji all day.  Last week Mayor Peebles had been charged with having sexual relations with several of the young summer interns.  Snapper had the unenviable task of sifting through thousands of text messages, emails, and social media conversations between the Mayor and his support staff.  He felt a small surge of pride thinking about how clever he’d been to discover the hidden meaning in messages containing a panda head, a rainbow, and an eggplant.  Totally scandalous!

Having graduated with a psychology degree, Snapper never imagined that he would make money as an interpretive emoji specialist.  While getting his degree he had nearly failed a course in Verbal Communication Skills.  Only in the last two weeks of the term had he finally learned to ask his questions out loud.  Normally he submitted his questions electronically during lecture, where they appeared on the class feed projected near the professor.

Analyzing emoji was perfect for Snapper, as emoji had been his primary mode of communication for over two decades.  Real words were reserved for schoolwork, or for when something extreme happened.  Otherwise, Snapper felt emoji were sufficient to express what he was experiencing 90% of the time.

Printed and spoken words just got in the way, as far as Snapper was concerned, especially when he had to use punctuation.  Even now, he was still waiting for a reply.  He should’ve just sent a message to his partner lounging in the next room.

His forearm started to tingle.  He looked at the emoji projected onto his SmartScreen:

Chicken Leg, Fries, Greens, Weary Face

Snapper selected an array of jars: solids, chix, greens, podado, and fats.  He carefully poured solids into three of the printer’s chambers before topping each one off with the appropriate powder.  He set the chix chamber to MEAT TEXTURE and the shape to DRUMSTICK. The weary face emoji could only mean one thing, a bad day at the office.  The kind of problem a supplemental fat infusion could solve.  He added two handfuls of fats pellets into the auxiliary compartment.

He pressed start and poured himself a cocktail while dinner printed.