
Actually, It’s Called Zuckerberg’s Monster


Prometheus and Dr. Frankenstein walk into a bar…


In ancient Greek mythology, Prometheus defied the gods by stealing fire and giving it to humanity. Zeus punished him by chaining him to a rock, where an eagle feasted on his liver for eternity, the wound healing each day only for the pecking and gnawing to begin all over again.

If you want to know where so many get the idea of a wrathful, white-bearded God…well, it all goes back to Zeus. He was a real prick.

Beyond Greek myth, the story of Prometheus became the embodiment of the innovative genius and the dangers of bestowing technology on such flawed creatures. Prometheus became the fictional avatar of the radical innovator boldly pushing humanity forward with new tools, new technology and new ways of thinking. His name also became cultural shorthand for the dangers of “playing God” and the moral and ethical implications of scientific achievement. Mary Shelley used the subtitle The Modern Prometheus for her iconic horror novel Frankenstein, which is about the perils and ethical follies of reckless scientific pursuit and the responsibility of creators for the damage their creations cause. Mary Shelley’s Monster isn’t the lumbering mute you remember from the Boris Karloff Universal films. He’s intelligent. He learns to speak and read by observing people. Humanity treats him with fear and hostility, and the Monster becomes bitter, angry and malicious in turn. The Monster in Frankenstein isn’t inherently malevolent; he merely lacks guidance and only sees the bad side of humanity.

If you’re wondering why I’m waxing macabre about Prometheus being pecked upon by hungry birds and Dr. Frankenstein being an egomaniacal narcissist who unleashes a creation into the world that he refuses to take responsibility for, well…I watched The Social Dilemma recently. Let’s just say that being pecked nearly to death in an endless cycle of misery and torment is basically a dead-on definition of using Twitter. Just with more cat pictures.

Oh, and the punchline for the joke setup above is that Prometheus burns down the bar and Dr. Frankenstein skips out on the tab.

Maybe it’s not “ha ha” funny.

Unintended Consequences

For every new technological innovation that humankind has discovered since the dawn of time, there has always been a period of…let’s call them “growing pains”. Great innovations carry hidden consequences that don’t always make themselves apparent immediately.

When charting the tools that helped humanity climb to the status of dominant species, fire was a big one. Learning how to harness it and put it to work allowed our ancestors to ward off predators, cook the parasites and pathogens out of their food, boil water to make it safer to drink, light the darkness and shape metal into tools, just to name a few uses.

We owe the entirety of modern existence to fire and our ability to put it to work. And yet, that wonderful discovery also carries a hidden curse. Any tool, in the wrong hands, can be dangerous. After all, a blacksmith and an arsonist both use the same basic tool.

Let’s run the tape ahead to a more recent example.

When the first automobiles hit the streets around the turn of the 20th century, they were a great innovation. For better and for worse, they changed the world. Horses were gradually put out of work (at least, most of them). Entire city layouts were rebuilt to accommodate cars. Agriculture, industry, commerce, work and leisure all changed dramatically due to the adoption of the automobile.

Of course, saying they “hit the streets” is itself misleading, as streets as we know them weren’t really a thing yet. Picture the dusty thoroughfare of a frontier town, with a Model T mowing down the cast of Little House on the Prairie, and you’re getting the picture. There were no rules for driving or owning cars, nor was there infrastructure to support them. There was no highway system to make travelling safer.

According to DetroitNews.com:

In the first decade of the 20th century there were no stop signs, warning signs, traffic lights, traffic cops, driver’s education, lane lines, street lighting, brake lights, driver’s licenses or posted speed limits. Our current method of making a left turn was not known, and drinking-and-driving was not considered a serious crime.

https://www.detroitnews.com/story/news/local/michigan-history/2015/04/26/auto-traffic-history-detroit/26312107/

City streets were chaotic places where people would get run over by unlicensed drivers, and for roughly the first 30 years of its existence, the car was a menace to public safety.

Of course, we adapted. After auto manufacturers got together, bought up all the major streetcar systems in big cities and shut them down, forcing people to buy cars, regulation and infrastructure to manage those cars became a priority.

(Sidenote: Cars were considered purely a rich person’s luxury at the time. Picture all airlines being dismantled in order to force everyone to learn to fly their own individual planes. That’s basically what happened.)

If that all sounds eerily like the plot of Who Framed Roger Rabbit?, it’s because that’s exactly where the makers of the film got the idea.

They got the idea of Christopher Lloyd with cartoon knife eyes directly from my nightmares.

https://en.wikipedia.org/wiki/General_Motors_streetcar_conspiracy

When it comes to cars, we’ve made a lot of advancements in curbing the human potential for error since those initial models rolled off the assembly line around the turn of the 20th century. Drivers must be licensed, and they are trained in common rules of the road that everybody is expected to follow. Conditions that impair judgement or reaction time, such as driving under the influence or – more recently – distracted driving, have gradually been addressed as new laws and regulations have been rolled out to curb them. Drivers can lose the right to legally operate vehicles if they don’t obey the rules.

Are cars perfectly safe? Nope! As long as they are operated by fallible human beings capable of using them irresponsibly, they can still cause massive harm. Knowing where the dangers are is at least a step toward addressing them.

That case study holds true for just about every tool humanity creates. Unexpected consequences don’t always announce themselves while strolling through the door and putting their feet up on the kitchen table.

Technology that doesn’t manage its public reputation can easily find itself on the outs. Nuclear energy, for example, could be a solution to phasing out fossil fuels, but its perception was damaged by a handful of cases like Chernobyl and Three Mile Island, where human error and faulty equipment led to the technology being handled irresponsibly, and by historically unpredictable disasters like the 9.0 earthquake and tsunami that hit Fukushima.

And we were all left with the image of nuclear energy being used as a weapon, the legacy of the atom bomb looming large in history without the beneficial side to counterbalance it.

Living in a Skinner Box

In 2007, I set up a Facebook account. I was a pretty early adopter and a frequent user. I was in my first year of broadcast studies in college, and a lot of my peers were setting up accounts. In a lot of ways, I was in the early target market of what Facebook started out to be – a college-student internet social space, where you could peep on which of your peers were single. (Not that it ever helped…)

I stopped posting regularly on Facebook around 2018 about the time the news came out regarding how instrumental it had been in weaponizing reactionary hate movements during GamerGate and mainstreaming conspiracy theories.

I also have an interest in gaming, and around the same time the emerging problem of video games weaponizing addictive behavior to keep players playing (and spending) had become a hot topic. I couldn’t help but notice that social media networks rely on the same Skinner Box behavioral manipulation mechanisms that slot machines use, in their case to keep users engaging with the platform.

Extra Credits did a great video (below) outlining the Skinner Box and how operant conditioning manipulates us. It’s filtered through a gaming-centered lens, but the concepts apply equally if you swap out points and leveling up for notifications, likes and shares.
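To make the mechanism concrete, here’s a toy Python sketch of a variable-ratio reward schedule, the same intermittent-reinforcement pattern slot machines run on. To be clear, nothing here reflects any platform’s actual code; the reward probability and the pull-to-refresh framing are purely illustrative.

```python
import random

def pull_to_refresh(pulls, reward_chance, seed=0):
    """Simulate a variable-ratio schedule: each refresh of the feed
    pays off (a like, a reply, a notification) only some of the time,
    and you can never predict which pull will be the winner."""
    rng = random.Random(seed)
    return [rng.random() < reward_chance for _ in range(pulls)]

def dry_streaks(outcomes):
    """Lengths of the reward-free gaps before each payoff -- the
    unpredictability that keeps you pulling 'just one more time'."""
    streaks, run = [], 0
    for hit in outcomes:
        if hit:
            streaks.append(run)
            run = 0
        else:
            run += 1
    return streaks

outcomes = pull_to_refresh(pulls=1000, reward_chance=0.3)
gaps = dry_streaks(outcomes)
# The average payout rate is steady, but the individual gaps are
# wildly uneven -- and that variability, not the rewards themselves,
# is what operant conditioning research says hooks the brain.
```

The punchline of the sketch: a schedule that paid out every third pull would be boring and easy to walk away from; it’s the uneven gaps that make the next refresh feel like it might be the one.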

Full disclosure, I still use Twitter more than I should. I tell myself it’s out of necessity to remain informed. Hell, once I finish writing this I will have to post it to social media in order for anyone to see it.

Again, social media is an incredibly useful tool, but it also has gaps and dangerous blind spots that must be addressed.

So let me be clear, I’m not casting stones here. I know how addicting that compulsion to stay “in the loop” is. I have a Twitter tab open right now that is making the writing of this piece a much longer process than it could be. I know the temptation to just pop in for a second and see what’s going on all too well. I know how easy it is to lose hours at a time just scrolling. I know the FOMO (fear of missing out) that comes with unplugging and walking away from it.

So, I don’t want anyone reading this to think I’m somehow above getting caught up in the same things, or that I’m casting judgement on others. I did the Facebook thing for a long time, and I still have to actively fight the urge to give in to my own addictive tendencies and dive back in. It took a concerted effort, and learning about the ways social media manipulates our brains and can fundamentally change how we feel about ourselves and the world, to pull me back from it.

So…Let’s Talk About The Social Dilemma

The Social Dilemma is a documentary, released on Netflix, in which a number of ex-employees and creators of various social media platforms talk about the hidden dangers of social media and the fact that those in charge of these systems seem completely oblivious (and in Mark Zuckerberg’s case…bordering on being in complete denial) as to the nature of those dangers.

The film also dramatizes the real-world consequences of social media addiction through a fictionalized story of a family whose members each struggle with a different social media problem, illustrating the dangers in some interesting ways.

The younger sister deals with the problem of self esteem issues arising from the constant comparisons to others, the drive for social approbation, and the focus on being photogenic. This is an incredibly important aspect of social media that often doesn’t get talked about. When all you see of other people’s lives is the highlight reel that gets posted to social media, it can be incredibly isolating and demoralizing. To quote broadcaster Tom Campbell, it becomes a case of “comparing your behind the scenes to other people’s greatest hits” and naturally leaves a person with a lopsided view of reality.

When all you see on social media is happy smiling faces, vacation photos, engagement announcements, etc – it can leave you feeling like everyone else has everything going well for them and you’re the odd one out.

Further reading: https://www.healthline.com/health-news/social-media-use-increases-depression-and-loneliness#Does-social-media-cause-depression?

Meanwhile, the brother deals with a gradual radicalization into conspiratorial hate groups, with the algorithm feeding him more and more radical content and gradually isolating him. The film depicts that indoctrination process of validation, addiction and eventually isolation from his friends, family and reality.

This aspect of the film goes into the erosion of truth and the lack of evaluation of information that social media platforms still haven’t managed to get ahead of. Conspiratorial thinking comes out of the human brain’s tendency to be manipulated by novelty bias, hostile attribution bias, pattern-seeking bias (aka apophenia) and proportionality bias. All of these form the basis for conspiracy theories…but that’s a topic worthy of a deeper dive on another day.

The Social Dilemma sounds the alarm bell very loudly on social media being an unchecked force that is having very real consequences on society. The film does lean pretty hard on fearmongering about new technology, in many cases advocating complete abstinence, which just isn’t possible in a world that has seen its power. However, in gathering interviews with people who worked inside these companies and hearing their concerns, the tone isn’t entirely without merit.

Social media is still very much in the “people getting run over in the street” stage of the car metaphor I used above. There aren’t any rules of the road on social media and the potential to threaten all of us is there, if we don’t get it under control. While I think social media does have benefits as a tool, it behooves us to take seriously the detriments and address them.

The cycle we’re living in is not an unfamiliar one. A new thing is created; the new thing causes unforeseen consequences; business cretins ignore the problems; the problems get too big and costly to ignore (a general rule is that every public safety law or regulation on the books has at least one threat of a lawsuit behind it); and safeguards are eventually introduced over the kicking and screaming objections of the absurdly wealthy business cretins. Tale as old as time…

For example: the car cigarette lighter predates the seatbelt by about 50 years. That should illustrate where the priorities were at the time.

The illusion of neutrality

I think I’ve made pretty clear with my fire and car analogies above that, ultimately, I am pro-tech. I recognize the value that technology brings in terms of making our lives better, and there is no use scaremongering about it. There really is nothing new under the sun when it comes to the cycle of new innovations being feared and mistrusted initially but eventually becoming accepted and normalized. The key to that process is identifying the consequences and blunting them.

Thing is, I’m also pro-civic responsibility, and that’s the crossroads I think we’re at in terms of our societal relationship with social media and, on a much deeper level, AI systems. We’ve seen the benefits of social media, but the dangers have become too big to ignore. We’ve seen how the psychological manipulation employed by social media platforms to retain engagement can also be weaponized if they aren’t monitored and regulated with human oversight.

These are systems overseen by amoral, libertarian tech bros who believe the consequences of their creations aren’t their responsibility to curtail. The same attitudes that tell them concepts like systemic inequality and privilege don’t exist also feed into the platforms they create. People like Mark Zuckerberg and Jack Dorsey, who manage to morally wash their hands of the broken relationships, weaponized bigotry and dumbing down of the discourse their inventions have enabled, need to be supervised in some capacity. If that means governments of the world stepping in to regulate and remove the absentee landlords from the driver’s seat, then so be it.

In the article Rise of the Racist Robots: How AI Is Learning All Of Our Worst Impulses, several instances are examined where unsupervised AI algorithms ended up going horribly wrong. We’re not just talking about darkly comic examples like Tay, the Microsoft AI chatbot that within a day on Twitter was parroting racist and antisemitic bile. Unsupervised AI systems are also capable of picking up on flaws in human nature and emulating them.

For example, Amazon’s in-house hiring AI was scrapped after it was discovered that the AI was sexist. In this case, the AI looked at Amazon’s current and past hiring records, noticed that the successful candidates were overwhelmingly men, internalized that inequality into its hiring recommendations and started filtering out female applicants, thereby doubling down on inequality in the hiring process.
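As a toy illustration of how that failure mode works (this is emphatically not Amazon’s actual system, which was far more sophisticated; the numbers and the counting approach are invented for the sketch), imagine a “model” that learns from past hiring decisions simply by tallying who got hired:

```python
from collections import Counter

def train_hiring_model(history):
    """Learn a hire-probability score per gender from past decisions.
    A deliberately crude stand-in for a real resume-screening model:
    it reproduces whatever pattern the history already contains."""
    hired = Counter(g for g, was_hired in history if was_hired)
    seen = Counter(g for g, _ in history)
    return {g: hired[g] / seen[g] for g in seen}

# Hypothetical historical records that skew heavily male -- this is
# the "ground truth" the model is trained on, inequality included.
history = ([("M", True)] * 80 + [("M", False)] * 20 +
           [("F", True)] * 5 + [("F", False)] * 15)

scores = train_hiring_model(history)
# scores == {"M": 0.8, "F": 0.25} -- the model hasn't discovered
# anything about merit; it has memorized the bias in its training
# data and will now apply it to every new applicant it screens.
```

A real system dresses this up in far fancier math, but the core problem is the same: garbage (or bias) in, garbage (or bias) out, now with a veneer of algorithmic objectivity.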

The problem isn’t AI systems (which are just doing what we tell them to do); the problem is that the systems have no guidance to account for human biases and weaknesses, compounded by an illusion of neutrality that gives those biases extra cover.

If you create an AI algorithm designed to predict criminal behavior, it is most likely going to be weighted against people of colour. The AI won’t know the racist history of the war on drugs, or that the Nixon administration deliberately used cannabis prohibition as a political tool to attack hippies and Black people.

Harper’s Magazine (in an excellent piece illustrating the futility of the war on drugs) included a chilling quote from Richard Nixon’s chief domestic advisor and Watergate co-conspirator John Ehrlichman, finally spilling the beans on the motivation behind the war on drugs.

At the time, I was writing a book about the politics of drug prohibition. I started to ask Ehrlichman a series of earnest, wonky questions that he impatiently waved away. “You want to know what this was really all about?” he asked with the bluntness of a man who, after public disgrace and a stretch in federal prison, had little left to protect. “The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

Now imagine you’ve designed an AI system to predict criminal behavior that uses previous law enforcement data and that isn’t aware of any of that underlying political or racial bias.
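The feedback loop such a system falls into is easy to demonstrate. In this toy simulation (the numbers and the winner-take-all allocation rule are invented for illustration, and real predictive-policing products are more elaborate), patrols are sent wherever the arrest data says crime is, but arrests can only happen where patrols actually are:

```python
def simulate_patrols(arrest_counts, rounds, finds_per_round=5):
    """Winner-take-all patrol allocation: each round, the patrols go
    to the neighbourhood with the most recorded arrests, and crime is
    only ever 'found' where police are looking."""
    counts = list(arrest_counts)
    for _ in range(rounds):
        hot_spot = counts.index(max(counts))
        counts[hot_spot] += finds_per_round
    return counts

# Two neighbourhoods with identical real offence rates, but the
# historical data carries a slight skew from past over-policing.
history = [12, 10]
print(simulate_patrols(history, rounds=10))  # -> [62, 10]
# The 2-arrest gap in the seed data becomes a 52-arrest gap: each
# round, the system "confirms" its own bias and feeds the inflated
# numbers back in as evidence for the next round's allocation.
```

The neighbourhoods never differed in actual crime; only the historical record did. The algorithm can’t know that, so it dutifully amplifies the skew forever.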

Much like Dr. Frankenstein’s Monster, who internalizes the hate and distrust society shows him because he has no one to guide him, AI systems are designed to learn. Without guidance, the things they learn might not be beneficial.

If Facebook’s AI prioritizes time spent on the platform and “user engagement metrics” (likes and shares), then it takes shortcuts: leaning on Skinner Box-style behavioral conditioning and prioritizing misinformation and divisive content, because that’s what gets circulated.
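A toy feed ranker shows that shortcut in action. The posts, the scores and the outrage-drives-engagement weighting below are all invented for illustration; real ranking models are vastly more complex, but they optimize for the same kind of engagement metric:

```python
posts = [
    {"title": "Local charity hits fundraising goal", "outrage": 0.1, "accurate": True},
    {"title": "City council posts meeting schedule", "outrage": 0.0, "accurate": True},
    {"title": "THEY don't want you to see this!!",   "outrage": 0.9, "accurate": False},
    {"title": "You won't BELIEVE what they said",    "outrage": 0.7, "accurate": False},
]

def predicted_engagement(post):
    # Stand-in for a learned model: outrage correlates strongly with
    # clicks, comments and shares, so a model trained purely on
    # engagement data learns to reward it.
    return 0.2 + 0.8 * post["outrage"]

# Rank the feed by predicted engagement, highest first.
feed = sorted(posts, key=predicted_engagement, reverse=True)
# The top slots go to the divisive, inaccurate posts -- not because
# anyone programmed "promote misinformation", but because the metric
# being maximized doesn't know or care what's true.
```

Note that accuracy never appears anywhere in the scoring function. That’s the whole problem in miniature: what isn’t measured can’t be optimized for.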

The one thing that The Social Dilemma makes clear is that the social media industry’s ability to self-regulate is failing. Misinformation and sleazy operant conditioning mechanisms are still rampant. The drivers are asleep at the wheel. It makes a solid case for why humanities studies should also be requirements in the STEM fields. Without human understanding to temper technological advancement, we end up at the mercy of our own flaws and biases.

While regulation (with a heavy focus on transparency in AI programming and guidance) would only be one step in bringing the monster under control, the other half of it is knowledge. Right now, that is where pushing back against the corrosive elements of social media is important. In the absence of safeguards and oversight, we need to be the safeguards. The only way to not fall victim to these kinds of forces is to educate ourselves on the very human frailties that we are all susceptible to.

A time I avoided getting scammed, and a time I didn’t

I want to leave you today with a couple of stories that illustrate how important knowledge is in protecting oneself.

Several years back, I almost got taken in by a scam.

While working my old job at a bookstore/print shop about 6 or 7 years ago, I got a call about photocopier toner from someone claiming to be from our distributor. It’s apparently a scheme that pops up every few years: scammers call a business multiple times over the course of months, gathering information about its photocopiers. Maybe they call entry-level employees (in our case, a reception desk that was always high turnover) and ask what company you’re with, posing as a competitor hoping to make a counter-offer. Then they call a few weeks later posing as the company itself, asking to confirm equipment information (model, serial numbers, etc). By the time they’ve called several times, they are able to accurately list off enough information that I – the person who dealt with the copiers and invoicing – didn’t think much of it. They had all the correct information.

So I get a call in the middle of a busy day. “Hi, I’m [scammer’s bullshit name] calling from [bullshit fake company]. We handle distribution of toner for [real copier company we use], and we understand you have a [copier model number] with the serial number [real serial number].” They tell me that the cost of toner went up recently, that they didn’t let us know about it, and that they’ll send us a few cases at the old price as a make-good before the new price kicks in.

Now, our internal policies stopped the scam right there, because I needed an official invoice in order to requisition a cheque, and the slapdash “invoice” they e-mailed me immediately sent up red flags. So I checked with others in the organization (who knew more about our copier contracts) and with our copier company, and they confirmed they had made no such call.

I then went and googled it to see if this scam had come up before and that’s where I found out how it worked. From that moment on, I would recognize the scam the instant they would call (it came up a few more times during my tenure) and they would always hang up the moment I indicated I wasn’t biting.

(Sidenote: Apparently, how the scam works is they send you junk cheap toner and then send lots of nasty legal threats if you don’t pay up. Most companies just pay the invoice without too much fuss because the people who know about copiers and the people getting invoiced for the toner don’t always communicate. By asking for an official e-mail invoice, I circumvented that because it became clear the company didn’t exist.)

Getting tricked is kind of human nature. Our brains have massive blind spots that are manipulated in a million ways every day. Scammers exploit gaps in safeguards and systems as well as flaws in human nature in order to overrule our reason and get us to do what they want. The reason this scam got as far as it did was I was in the middle of a busy day and just wanted to get off the phone.

It’s also important to mention that nobody is immune, so here’s a story of a scam I didn’t avoid. A few years later, my father passed away, and I got signed up for a credit card I didn’t want in the airport while travelling to his funeral. I was in a fog of shock and so focused on ending the conversation without bursting into tears that when I heard “would you like to sign up,” I thought I would get some pamphlets or the application would get declined, and just figured listing off my information was the quickest way to do it.

Turns out, I vastly underestimated how willing credit card companies are to give out cards on the flimsiest of information.

Now, I want to clarify here that I call it a scam, but everything was on the level. The woman selling the cards didn’t know my situation. She was trying to earn a living and I was trying to end the conversation as quickly as possible. There wasn’t any malice here, and I wasn’t out any money (minus a yearly $30 admin fee for the card, which I keep basically for emergencies). However, at the end of the day, I had a credit card I didn’t need or want, so I got scammed.

I’ve since told people about it, but it was pretty embarrassing for a while because I pride myself on being at least semi-knowledgeable and aware of the potential to be suckered in. Everyone makes mistakes and has vulnerabilities that can be exploited.

Knowledge is a powerful tool. Once I learned how the toner scam worked, I was able to pass that knowledge along to others and close the gaps in our operations that allowed it (this is also true for fending off multi-level marketing schemes, but that’s a subject for another day). That said, I wasn’t able to stop myself getting scammed into the credit card I didn’t want or need, because even knowledge can sometimes get overruled by emotion or moments of vulnerability.

As a final post script note: Knowledge also meant I was able to fuck with the toner scammers by stringing them along whenever they would call me trying to do their pitch. Knowledge is armour, but it can also be fun.

Thanks for reading. Until next time, I’ll catch up with you further on up the road.

Follow me on Twitter (oof…this is awkward) @TheRogueTypist.

Yeah…I know.

That’s all for this time. I do want to dig into conspiracy theories in more depth in the future (especially how they flourish during pandemics) and the ways they exploit our brains, as I think it’s interesting stuff. There’s a lot of crossover between the cognitive biases that social media exploits and the ones conspiracy theories prey on.
