GPT-3, Disinformation, and Creativity 📰
AI or human? 🤔 GPT-3 is currently the largest and most powerful natural language model in the world.
“I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
That’s a totally reassuring, non-threatening, anti-singularity piece of text written by GPT-3 for ya 😳
But for now, let the humans do the talking 😁 Today we’ll be diving into the Generative Pre-trained Transformer 3 (GPT-3), a model that can write realistic text in almost any genre ✍, and how it relates to disinformation, creativity, and trends in AI and security.
🤖 GPT-3 and large language models
🗞 Disinformation (would you be fooled too?)
🎨 Can AI be considered creative?
🎙 Δx podcast
You can now subscribe to the Delta X channel directly to listen to past and future episodes :) 👆
Choose your favorite podcast app and check out the newest episode featuring 2 guests from Black Hat USA 2021! Andrew Lohn and Micah Musser explain “Disinformation At Scale: Using GPT-3 Maliciously for Information Operations” - the double-edged sword of large language models. ⚔
How can GPT-3 be used to amplify disinformation?
What are the implications GPT-3 has for Artificial General Intelligence (AGI)? Can GPT-3 be considered “creative”?
What are the future trends for this technology, cybersecurity, and society?
Listen in to this episode for an additional surprise in the beginning 😉
💎 Δx takeaways
GPT-3 is an autoregressive language model that uses deep learning to produce human-like text, created by OpenAI. Or, as Micah puts it during the podcast:
“Autocomplete on steroids” 💊
GPT-3 takes text and transforms it using an architecture called the transformer. 🚗🤖 It leverages two big advances in large language models:
😪 Attention mechanism (focuses on the important words and parts of a sentence, and forms the basis of the transformer architecture), and
🏛 Unsupervised pre-training (instead of trying to match a desired output, compress and decompress each sentence — like compressing a sentence down to its meaning, then reconstructing it again, similar to translation)
It’s also HUGE — GPT-3 has 175 billion parameters. 😮
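To make the attention idea concrete, here’s a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of the transformer. The sizes here are toy values for illustration only — nothing like GPT-3’s actual dimensions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token 'attends' to every other token: values V are mixed
    together, weighted by how well queries Q match keys K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity between tokens
    # softmax over each row so the weights sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# 4 tokens with embedding dimension 8 (toy sizes)
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` tells you how much one token “focuses on” every other token — that’s the attention mechanism in miniature.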
Here are a few examples:
In case you haven’t listened yet… here’s the podcast intro I generated using GPT-2 (the predecessor of GPT-3):
Pretty interesting, right?
GPT-3 can also be used for chatbots, and potentially in the future as personal assistants, law contracts, and even for areas such as music and code.
And this one’s difficult: would you be able to tell whether this was AI or human? 🤔
In the example above, only 12% of participants correctly identified the text as AI-generated: most people thought it was written by a human, not GPT-3. ❌ In an era when we tend to believe what we see online and fake news spreads like wildfire, imagine the problems widespread use of GPT-3 technology could create if used for malicious purposes. Disinformation techniques could be used to twist politics, disseminate false beliefs, and serve other adversarial purposes.
This could become even more of a problem when combined with human disinformation operators who scan GPT-3’s output for mistakes and correct them before publishing, in a human-AI pairing. 📰
Because it can be so hard to distinguish GPT-3 text from human text, Micah and Andrew explain that, right now, the main defense mechanism has relied on behavioral signatures (accounts that behave oddly) rather than semantic content. 💬
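As a toy illustration of what a behavioral signature might look like, here’s a sketch that flags an account for posting implausibly fast or repeating the same text over and over. The thresholds and data shape are made up for illustration — real platforms use far richer signals.

```python
from collections import Counter

def flag_suspicious(posts, max_per_hour=30, max_duplicates=5):
    """Toy behavioral check: flag an account that posts implausibly
    fast or repeats identical text many times. Thresholds are
    arbitrary, chosen purely for illustration."""
    per_hour = Counter(p["hour"] for p in posts)
    dup_counts = Counter(p["text"] for p in posts)
    too_fast = max(per_hour.values(), default=0) > max_per_hour
    too_repetitive = max(dup_counts.values(), default=0) > max_duplicates
    return too_fast or too_repetitive

# 40 identical posts in the same hour vs. 10 varied posts spread out
bot_like = [{"hour": 0, "text": "Vote now!"} for _ in range(40)]
human_like = [{"hour": h, "text": f"post {h}"} for h in range(10)]
```

The point of the behavioral approach is exactly this: it never looks at what the text *says*, only at how the account *behaves* — which is why it still works when the text itself is indistinguishable from human writing.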
When I first read articles written by OpenAI, one of my first reactions was questioning human creativity. If AI can write articles, generate code, and piece together music, what defines our boundaries of what makes a human or AI creative? 🤔 Are we just byproducts of our own training and neuron updates from the data we receive around us? 🧠
My instinct is to say in humans I’m not sure if there is creativity in a pure sense vs “what we are trained on”… I don’t see a first order reason why what it’s doing couldn’t count as creativity just as much as an artist pulling from an inspiration from their own ~ Micah
While this may be more of a philosophical question, it’s still an interesting one to ponder - what do you think?
In general, the trend is that language models will probably get bigger and more expensive in the near term 💵, and more powerful and multimodal 🤹‍♀️, affecting different areas of our lives.
There are lots of pieces of cybersecurity: vulnerability discovery, patching, phishing, intrusion detection, exploit creation - and each of those will be affected by AI in different ways. ~ Andrew
Want to receive more biweekly Delta X newsletters like this one breaking down tech, startups, & innovation? Take 1 second to drop your email below 👇
You can read more about GPT-3 and disinformation in Andrew and Micah’s report, or by clicking on any of the source links under the images above for articles about GPT-3.
📰 Δx change
🔦 Liquid light at room temperature: Light can be liquefied into a superfluid AKA a Bose-Einstein condensate, which may allow quantum computers to operate at room temperature. This is made possible through light-matter particles called polaritons.
🐜 Smallest winged microchip: A flying microchip spins like a helicopter seed and is the smallest human-made flying structure. It allows for environmental sensing: “contamination monitoring, population surveillance or disease tracking.” Even better? It dissolves in water once it’s no longer needed.
💧 Removing lead from drinking water: A new approach uses shock electrodialysis to achieve a 95% reduction of lead in the outgoing fresh stream. It’s cheap and low-energy.
Hope you enjoyed this edition of the Delta X Newsletter, and have a great week!
Thank you for being a part of the Delta X community and reading all the way to the end! :D