Covid-19 and the Big Data Dilemma

We have all faced the same burden since the pandemic started in March. We have too much information and not enough of it. Living with both scenarios might sound strange. However, that’s the dilemma we’re in.

Tons of data is being generated, collected, and analyzed. Yet, some questions are never answered. Sometimes the answer changes. Other times the answer generates new questions.

Too much data with too many discrepancies and different variables creates challenges. Equally challenging is having an abundance of data, but not the right data. Many questions are still unanswered.

The Challenge with Too Much Information

Each country has its own method for testing and reporting on Covid-19 cases. Within each country, testing and reporting practices sometimes differ between states or provinces, or even at the local level.

Additionally, lots of other data is being generated or collected on issues related to the pandemic. One example I found interesting was a decrease in the number of babies admitted to the NICU in some places.

A lot of data is available, but it can't be analyzed accurately. For example, several countries have started reopening their school systems. However, each country used a different method and reported on different elements. Some countries did more testing. Others took a more phased approach. Others are using a mix of online and in-person education.

Countries looking to reopen schools must review the wide range of options available. Then try to create a policy and propose a plan to keep children safe. But couldn’t this be simpler if some common elements existed between all the data sets? Or if some of the collection methods were standardized?

Basing decisions on available data comes with risks, especially when there are inconsistencies in the data creation process.


The Challenge with Too Little Information

We still don’t have enough information yet to answer important questions. For example, how long are people immune? What percentage of the population is asymptomatic? Do we still need to wash our groceries? Or transfer takeout food to different containers? Why can’t we do a better job of tracking and distributing PPE to people who really need it?

The challenge of having so much data available is making sure the right data is being collected. Otherwise, we're all just trying to make sense of all the "noise": too much information, and not enough of it useful for giving us the answers we need.

The Problem with Doing Business by Email

I’m amazed at how relevant email is with so many alternatives available. Doing business by email is still the status quo. This is true even when exchanging highly personal and sensitive information. Better and safer options are available. Why don’t we use them?

Email took root in the workplace more than 30 years ago. It quickly became a default way to do business. Yet decades later, email is still the "go-to" solution. This has become even more apparent during the pandemic.

In some cases, the suddenness of the pandemic required employees to work from home overnight. Employees relied heavily on email when proper information infrastructures weren't in place, even when email wasn't a good fit. For example, using email attachments to work collaboratively, or using email to send sensitive information.

Working Collaboratively with Email

Working collaboratively on documents through email attachments has never been a good solution. Multiple people editing an attachment means multiple versions will be created. Changes are made without seeing what others modified, resulting in duplication of effort. Even worse, someone must compile all the changes. Nobody wants to do all that tedious copying and pasting.

Consequently, changes may be missed, or worked on unnecessarily. The process is cumbersome and more prone to errors.

Assigning an editing order is a quick way to resolve some of these challenges. Ultimately, the best solution is investing in software that lets everyone edit the document in one place. This eliminates the need to compile changes. Also, people can see edits in real time.

Alternatives to Email for Sensitive Information

I can't understand why so many businesses still receive personal and sensitive information through email. It's 2020! The alternatives offered are even worse. Usually they include things like faxing or delivering paper copies. So many great high-tech options exist. I'm astounded that email is still the first option.

Email is a poor way to share personal or sensitive information. The method may seem safe and private because the sender can control who receives it. However, once sent, there's no way to track where the information ends up.

The attachment could be forwarded. Or downloaded and saved on a public computer. Or saved in multiple locations or devices. Tracking all these “copies” becomes problematic.

Investing in a portal is one solution. This allows clients to seamlessly upload sensitive documents directly. No messy attachments. No unnecessary copies made.

Social Media: Fanning the Flames

When social media started, it was different from what it is now. Social media was a new way for people to communicate: to share ideas, disagree with one another, and have open discussions. Facebook, for example, was for people to connect with one another. People could easily maintain contact, get updates, and make "friends."

As more people used social media, its scope expanded. People formed groups, advertised businesses, influenced others, shared photos, etc. Some people presented an idealized portrait of their lives, giving rise to FOMO, the "Fear of Missing Out."

What started out as a platform for connecting and sharing different perspectives quickly devolved into something else entirely. Social media has become a place for people to maintain or promote narrow viewpoints, to gain support for them, or to use them to influence others.

Social media companies compete for an important commodity: our attention. To keep us engrossed and addicted to our accounts, they use a number of tactics.

One strategy is to use algorithms to show us content we're sure to like. The algorithm might be based on our past selections, or on content that other users "like" us have viewed.
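To make that idea a little more concrete, here is a minimal sketch in Python of the two approaches described above: ranking posts by how well they match your own history, and by what users with overlapping histories have viewed. Every name, number, and scoring rule here is invented for illustration; no platform's real recommendation code is anywhere near this simple.

```python
from collections import Counter

# Hypothetical illustration only; not any platform's actual ranking code.

def score_by_history(post_tags, my_history_tags):
    """Rank higher the posts whose tags match what I clicked on before."""
    history = Counter(my_history_tags)
    return sum(history[tag] for tag in post_tags)

def score_by_similar_users(post_id, views_by_user, my_views):
    """Rank higher the posts viewed by users whose history overlaps with mine."""
    score = 0
    for views in views_by_user.values():
        overlap = len(my_views & views)   # how much this user's history resembles mine
        if post_id in views:
            score += overlap              # their views count more if they are "like" me
    return score

# Toy data: my past clicks and two other users' viewing histories.
my_history_tags = ["gardening", "gardening", "privacy"]
my_views = {"p1", "p2"}
views_by_user = {"u1": {"p1", "p2", "p9"}, "u2": {"p7"}}

# Two candidate posts and their tags.
candidates = {"p9": ["gardening"], "p7": ["sports"]}

ranked = sorted(
    candidates,
    key=lambda p: score_by_history(candidates[p], my_history_tags)
                  + score_by_similar_users(p, views_by_user, my_views),
    reverse=True,
)
print(ranked)  # ['p9', 'p7'] -- more of what I already like rises to the top
```

The point of the sketch is the feedback loop: whatever you already clicked on determines what gets shown next.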

I know lots of people who love this personalization. Why would anyone want to spend their time sifting through irrelevant content? Or posts not aligned with their interests?

However, the downside of this system is that people are never exposed to anything different, or to content that is contrary, challenging, thought-provoking, etc. Whatever you like, social media companies make sure you get more of the same.

Algorithms push people into self-reinforcing content. It's easy for people to always see the same types of things, as opposed to being exposed to a wide variety of opinions and viewpoints. This is problematic.

I often read articles about misinformation, disinformation, and hateful content promoted through social media. The "solution" is for social media companies to discover and eliminate this type of content using algorithms. However, it doesn't really solve the problem. Instead, it pushes these viewpoints to other platforms, places where they can grow and gain mass followings under the radar.

Perhaps instead of pushing out "banned" content, the algorithms could be readjusted to offer a diverse range of viewpoints. Give users something to consider rather than spoon-feeding them more of the same.

Ferocity

I’ve loved ladybugs my whole life. Cheerful red bugs with distinctive black spots. Their folded wings are an engineering feat of nature.

Until recently, I thought of them as “friendly” bugs. Pretty to look at and not interested in biting me. And then I got an aphid infestation in my balcony garden. Two years in a row! If you’ve never had or seen aphids, they’re like plant lice. Completely disgusting.

Ladybugs, I discovered, are natural enemies of aphids. Last year I couldn't find any ladybugs for sale. So I filled a spray bottle with a mild soapy solution. I diligently sprayed the infested plants twice a day with the sudsy water. I convinced myself it was working. It wasn't. Those horrid aphids were everywhere.

This summer I decided to make the balcony garden really nice because of the Covid-19 stay-at-home requirements. Once again, the aphids appeared. Just a few at first. I immediately concocted the soapy solution and started spraying. And spraying. And spraying.

The aphids kept spreading. First infesting my zinnias before reaching the johnny-jump-ups and eventually traveling across the balcony to my salad mixes. My salad mixes were so lousy with aphids I couldn’t eat them.

This year I purchased 1000 ladybugs for $30 from a local nursery. One thousand seemed like a lot, but given the flight risk, I figured it was better to have extra.

I had read some myths about using ladybugs to combat aphids. The main claim is that ladybugs won't work because they simply fly away. However, I was counting on a sacred law of nature to disprove this theory: no animal will turn down a free lunch. My plants offered the ladybugs a 24/7 buffet of tasty, plump, and abundant aphids.

We released the first batch over the weekend. I was immediately impressed by their fierceness. Over in the johnny-jump-ups I watched a ladybug gobble up an aphid in her pincers. We cheered her on! The next morning, we checked on the team. Within minutes we saw the cheerful, red-spotted insects crawling diligently over the infested areas.

Every once in a while I find myself astounded and amazed by something new. Watching these tiny but mighty troops eat aphids is really incredible.

Sometimes we’re so busy in our lives, running around, worrying about big things, we forget about the power of small things. I feel humbled by, and extremely grateful for, these ferocious, carnivorous predators.

Social Media Mixed Messaging

Separating real information from disinformation and misinformation can be confusing on any social media platform. Social media companies, e.g., Facebook and Twitter, also struggle to make this determination.

Facebook claims to use algorithms to scan through millions of posts. However, with over 2 billion users, missing even 1% of questionable content is a big deal. It’s also unknown how the algorithm is programmed. Who decides what is questionable?
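To put that 1% in perspective, here is a rough back-of-the-envelope calculation. The daily post volume and the share of questionable content below are made-up, hypothetical numbers, not figures reported by Facebook; only the 1% miss rate comes from the point above.

```python
# Back-of-the-envelope sketch with made-up numbers; not Facebook's actual figures.
posts_per_day = 500_000_000   # hypothetical daily post volume
flagged_share = 0.02          # hypothetical share of posts that are questionable
miss_rate = 0.01              # the "1% missed" from the paragraph above

missed_per_day = posts_per_day * flagged_share * miss_rate
print(f"{missed_per_day:,.0f} questionable posts could slip through each day")  # 100,000
```

Even with these guesses, a 99% success rate would still leave a six-figure number of posts unchecked every single day.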

What about the difference between expressing an opinion and spreading “hate speech”? Some companies claim to ban “hate speech.” Or posts that are perceived to incite violence. Who makes that determination? And how is it enforced? Should we allow others to express their viewpoints even when we don’t agree with them? When is the line crossed between an unpopular opinion and something hateful? It may be obvious in some cases, but language is nuanced.

In library school we learned about collection maintenance. We were taught a good library has at least one thing you find offensive. This doesn’t mean the collection has bad content. It means the collection offers diverse and varied perspectives. Even if you don’t agree with all of them.

Could a library have something like "hate speech" in the collection? Well, it might, if the item was published by an authentic source and contained information. In other words, if it was verifiable information, as opposed to disinformation.

One big difference between libraries and social media platforms is that the former curates published content. If you find something objectionable in the library catalog, it was still vetted by professionals. Meaning, it was real information, not disinformation/misinformation.

Social media platforms, by contrast, largely act like distributors instead of curators. They're available for anyone to post their thoughts, verified or not. However, social media companies have been forced to rethink how they operate because of recent criticisms.

The first challenge for social media companies is to figure out what role they play with content. Are they going to be distributors, publishers, or curators? Once that is established, they need to determine what content should be allowed. And finally, how to enforce that, keeping in mind that social media platforms operate across cultures, languages, and countries.

Social media is an influential and popular mode of communication. Social media companies can no longer afford to be mere distributors. Or send mixed messages about the content that is allowed on their platforms.

Google: Not Built for Deletion

Last week Google announced that it would delete users' histories on new accounts by default. For existing accounts, the options are already available, but Google didn't want to change the default for existing users. Meaning, if you already have an account, you'll have to make the changes yourself.

Google’s support pages provide instructions on how to Delete Your Activity. You can also learn how to set up scheduled deletions.

Deletion by default is a big deal. I never liked the idea of data being collected just because it can be. Or as a way to “improve customer experience.” This is really just something companies say when they are giving us something for free in exchange for collecting our data. The companies then monetize this data, usually with targeted advertising.

Personally, I find this a bit creepy. However, I know lots of people who appreciate the focused advertising, catering to their needs and likes.

Activity controls in your Google account are available for: Web and App Activity, Location History, and YouTube History. According to Google, choosing to save your activities allows "for better personalization across Google. Turn on or pause these settings at any time."

Being The Deletist, I felt compelled to investigate the new deletion options. However, I had paused my activities a long time ago. So I didn’t really have much to delete. Also, as an existing customer, I had already set my auto-deletion to 3 months for everything. I probably did this right when the option became available.

Even though Google is now trying to be more transparent and make it easier for us to have some kind of “control” over our data, it’s still tricky to navigate. Many options are buried several layers deep in the settings. And a lot of people probably never even check their settings. If they do, they likely don’t have time to go through all the choices. It can be overwhelming at times.

I suppose this is why Google decided to do deletion by default. It's not perfect, mostly because Google still retains some data. Even if the data is anonymized, as Google claims, I'm still uncomfortable with the idea of having data collected about me and my habits. But if I want to be part of society, that's the price to pay.