Publisher Zack Abrams reflects on the article he published twenty-four hours ago. 

Allow Me To Begin With An Apology

I’m sorry. Yesterday, I published an article on this site entitled “A Discussion Of AI, Truth, And Lies,” a review of a virtual talk presented by the Journalism School. To be honest, it wasn’t my best work; there are some noticeable grammatical errors, and it could’ve been organized much better. But that’s not what I’m apologizing for. 

Once again, I’ll be upfront: that article wasn’t just not my best work; it wasn’t my work at all. Well, I contributed in a small way, but all the heavy lifting was done for me. I passed it off as my own work, despite knowing full well that it wasn’t. It’s something I was repeatedly told never to do throughout my years of schooling and internships. But that’s not what I’m apologizing for either. 

The article isn’t my best work. It’s not even my work. To be completely forthcoming, it’s not even an article. Sure, it has the shape of a Bwog article: the first two lines are italicized — that was the part I did, actually, along with the title — and the following lines give an overview of the event, wherein Professor Martin Waldman discusses the dangers of AI. The article even sums up the Q&A at the end. Only: there was no Q&A. The quote attributed to Nietzsche, “there is no intelligence other than that which is built by the wisdom of others,” was never said by Nietzsche, or in fact by anyone at all as far as I can tell. Prof. Aida Di Stefano, Director of the Center for Ethics and Knowledge at the Committee on Global Thought, didn’t introduce Waldman, not out of professional jealousy or anything, but simply because neither of them has ever existed. 

These lies, cleverly contorted into the shape of truth, were the invention of an AI that I trained on the complete text of every Bwog article ever published, prompted with the aforementioned italicized lines, and instructed to make five attempts at guessing what should follow. The very first thing it spit out was the event review I published, with only a few minor edits made by a Bwog editor who was unaware of the article’s true origin. I titled the piece “A Discussion Of AI, Truth, And Lies.” Consider this part two. 

Oh, and I do apologize for lying to you. But, in my defense, I’m telling you about it now. The next guy won’t. 

Creating Martin Waldman

The hardest part was the wait. Oh, and resetting the credentials to Bwog’s database. I happen to be a Data Science major, but anyone with rudimentary tech skills could’ve done what I did. I didn’t even need to run it on my own hardware; it all happened on Google’s servers, for free. You can see my exact process right here; the only thing I supplied on my end was the .csv with the complete output of Bwog since its inception. 

Here’s the recipe for making an AI: first, add data — the more the better. Next, set Google’s processors to high and bake for several hours (seven in my case), depending on the amount of data and how fine-tuned you want the model to be: like a cake, overdoing it can be just as bad as underdoing it, since a model trained too long starts memorizing its source instead of imitating it. Finally, let cool as you back up your progress. Now you’re ready to generate: give it some text at the start to guide it, if you’d like, and play with the parameters until it sounds like your source. Voilà! Easy-bake Skynet. 
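For the curious, the whole recipe fits in a few lines of Python. The sketch below is illustrative, not my actual notebook: it assumes Max Woolf’s gpt-2-simple library, which most Colab fine-tuning tutorials are built on, and the file name and parameter values are stand-ins for whatever you’d tune by taste.

```python
# Illustrative fine-tuning recipe using the gpt-2-simple library
# (https://github.com/minimaxir/gpt-2-simple). The file name and
# parameter values are stand-ins, not my exact settings.
import gpt_2_simple as gpt2

MODEL = "124M"  # the smallest public GPT-2 checkpoint
gpt2.download_gpt2(model_name=MODEL)  # fetch the pre-trained weights

sess = gpt2.start_tf_sess()

# Step 1: add data. gpt-2-simple accepts a single-column .csv,
# treating each row as one document.
gpt2.finetune(sess,
              dataset="bwog_articles.csv",
              model_name=MODEL,
              steps=2000,       # bake time: too few steps sounds generic,
                                # too many just memorizes the source
              save_every=500)   # back up your progress as it cools

# Step 2: generate. Seed it with some opening text and play with the
# parameters until it sounds like your source.
gpt2.generate(sess,
              prefix="Publisher Zack Abrams reflects on",
              temperature=0.7,  # lower = safer prose, higher = weirder
              nsamples=5,       # five attempts at what should follow
              length=500)
```

Run on a free Colab GPU, the fine-tuning step is the only part that takes real time.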

Now, the neural model wasn’t created from scratch; with this method, I simply fine-tuned an existing model formally called Generative Pre-trained Transformer 2, but best known by its nickname: GPT-2. Published by OpenAI, an artificial intelligence startup co-founded by Elon Musk among others, GPT-2 was trained on a dataset of 8 million webpages — Bwog’s articles number only in the tens of thousands, so think of the fine-tuning as giving someone lessons in your local accent after they’ve already learned your language. 

GPT-2 was published about a million years ago, back in February of 2019. Its next evolution, GPT-3, has been in early access for a few months, and it’s already proven itself to be much more capable than its predecessor; alas, trolling the readership of a college blog did not register as an important enough mission to qualify me for an invite to the beta before Microsoft licensed the model exclusively. Still, GPT-2 has shown itself to have a capable grip on Bwog’s voice — it’s a quality we share, and one I still hold in high esteem, though I have to admit my pride is a bit wounded. 

Back in 2017, with the help of a simple tutorial, I set up a Twitter bot, appropriately named @notbwog, which tweets imitations of Bwog tweets. It tries its best, anyhow. @notbwog is based on a comparatively simple Markov chain model: it records which words tend to follow which in the source tweets — and only the latest ~2,500 at that — then strings new tweets together one word at a time, in a crude attempt at sentience (see the sketch below). I was working on upgrading the bot’s brain to GPT-2 when I had the idea for this article; @notbwog’s brainlift will have to wait until winter break. 
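For comparison, here’s roughly what a word-level Markov chain generator looks like. This is a sketch of the general technique, not the bot’s actual code; details like the input file and the one-word lookback are my illustrative choices.

```python
# Sketch of a word-level Markov chain text generator, the kind of model
# behind @notbwog. The input file and chain order are illustrative.
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to every word observed to follow it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def generate(chain, max_words=30):
    """Random-walk the chain: draw each next word from the words that
    followed the current one somewhere in the source tweets."""
    word = random.choice(list(chain))
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:  # dead end: this word only ever appeared last
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    with open("bwog_tweets.txt", encoding="utf-8") as f:  # one tweet per line
        chain = build_chain(f.read().splitlines())
    print(generate(chain))
```

Because each step looks back only one word, the output wanders mid-sentence; that’s the “mashing up” at work, and it’s a big part of why GPT-2, which weighs hundreds of words of context at once, is such an upgrade.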

the amazing rAndI

So, why have I trolled you in this way? First, tell me: are you laughing, angry, or scared? Personally, I oscillate between the three faster than rAndI — that’s what I’ve named it, by the way — can come up with a suitable metaphor for this scenario. (rAndI’s best guesses: “faster than my energy can be sustained,” “faster than I can say,” and “faster than you can tell.”) 

Honestly, I think everyone should be feeling a healthy mix of all three. We’ve heard so much about fake news and disinformation farms; now, rather than tilling the soil themselves, bad actors can let a model do the planting and merely pull some weeds here and there. Great advances have also been made in photo and video manipulation, often called deepfaking, and in audio manipulation too. Computer-generated characters were once accessible only to big-budget Hollywood studios; now, a new generation of online creators perform entirely through virtual bodies, and have found audiences of millions. 

Of course, this is far from the first time in human history that bad actors have attempted to exploit the public’s benefit of the doubt for personal gain. Back in the ’70s, self-proclaimed psychics, most notably Uri Geller, gained notoriety for their supposed feats of psychic ability — bending spoons and turning phone book pages with their minds. Or so they claimed, anyway; career illusionists instantly recognized their bits as rebranded stage magic tricks. 

James Randi started his career as an illusionist dubbed The Amazing Randi; however, as these “psychics” began to capture the attention of the public, Randi set out to correct the cosmic record. At the request of Johnny Carson, a former magician himself who was also skeptical of Geller’s claims, Randi advised The Tonight Show’s staff to prepare their own props for Geller’s 1973 appearance and keep them out of Geller’s hands beforehand. Cut off from his usual setup, Geller failed repeatedly, on air, to conjure up any psychic powers. 

This strategy arguably backfired, as Geller only became more famous, but I sympathize with Randi, who died this October at the age of 92. Randi loved tricking people, as long as he knew that they knew they were being tricked. It’s what he sold, and it’s what they bought. 

I probably could’ve left my fake Bwog post up for months, maybe years, without any readers noticing its inaccuracies, even though every one of its lies would collapse under a simple Google search. It wouldn’t feel right to me, though, not when I have the chance to inoculate a group of people, no matter how small, against the coming flood of AI trolling, spam, and hate. I’ve given you two shots, back to back; you’ve seen it, and you’ve fallen for it. Hopefully that’s given you some measure of protection against the real thing.