This is a hotly debated question, and there isn’t really a single answer. But if we were to try to answer it directly, the honest response would be “from everywhere.”
The Birthplace of AI
Code and code libraries have long been shared among developers and programmers on platforms like GitHub. Code that is made publicly available for anyone to use and modify is referred to as “open-source” code. GitHub was created as a repository for open-source code, a place where people could share their work and others could use it as the basis for new projects.
As an example, a developer may have written a simple open-source cryptocurrency wallet that works with one of the popular blockchain tokens and posted the wallet code on GitHub. Another developer who needed a wallet could then take that code and add their own modifications to make it work for their needs.
Typically, the second developer will credit the first (which is usually required by the license), and will likely also post the new version of the wallet back on GitHub. It’s very similar to making a quilt. My mom used to have craft parties where her girlfriends would all come over, talk about their kids, and stitch together small squares of material. Eventually, all those squares would be stitched together to make a quilt. Open-source code works the same way. Virtually every major program out there, from Chrome and Windows to Android and macOS, has incorporated open-source code at some point in its development.
The same is true for the code libraries that developers use when writing or editing software. These libraries are often open-source as well, and they supply the basic building blocks from which larger programs are assembled. In fact, the popular ChatGPT service was briefly taken offline in March 2023 after a bug in an open-source library it relied on exposed some users’ data.
The takeaway here is that even when a company boasts it has created something, there is a very high likelihood it started down that development path using open-source code. Why create something that already exists and performs the functions you need? And while open-source code is typically free to use, it still comes with a license; at a minimum, most licenses require you to credit the original developer(s) whose name(s) appear in the code you’re using.
Unfortunately, this isn’t done as often as it should be.
Why is this important?
It’s no secret that A.I. is a powerful tool. The real secret is that the code behind these powerful machine-learning tools is out there, available for virtually anyone to use at will. On its face, this isn’t really that troubling. What is troubling is that these powerful tools can easily be accessed and misused by parties with nefarious intent. Or in layman’s terms, much of the code base for A.I. can easily be obtained by bad actors.
What makes this especially troublesome is that the purpose of A.I. isn’t merely to generate chatbots that write blogs and answer questions in paragraph form like a smarter web browser. The purpose of A.I. is to imitate human intellect and behavior. A quick web search turns up dozens of definitions describing A.I. as “the theory and development of computer systems able to perform tasks that would normally require human intelligence.” With A.I. working to imitate human behavior, what happens when bad actors get hold of the open-source code base and shape it into a nefarious tool?
Hacking and digital theft are at an all-time high. Identity theft, cryptocurrency theft, phishing and email scams, and other types of digital theft constantly dominate headlines and impact tens of thousands of people daily. Each of us can recall some major breach at a major company or organization in which millions of users’ information and data were exposed.
The Pitfalls of A.I.
Consider just how elaborate an A.I.-generated scam could be. But instead of looking at the usual scams hackers rely on, such as email phishing and links to downloadable executable files that hold your computer hostage, let’s look at something deeply unsettling by adding “deep fake” technology to the equation.
A “deep fake” is audio and/or video that has been digitally manipulated to replace one person’s voice and/or likeness with another’s. Now let’s make that replacement a famous person. Next, we make this deep-faked famous person say something incredibly controversial or inappropriate. Then we release the new content through social media, sit back, and watch what happens when people see this incredibly convincing forgery in action. Even at a cursory glance, this is extraordinarily troublesome.
Another concern stems from the use of “voice-activated” technologies. We’re all familiar with “Siri” and “hey Google” and all the other voice-command devices that hang on our every word. Since most (if not all) of these gadgets are able to capture our voices and those of our family and friends, what happens when A.I. gets hold of our voices and can imitate us? What would happen if one evening you got a call from a family member or friend who had experienced an emergency? Since you recognize the voice on the other end of the call, you immediately spring into action. But what if that voice was A.I.-generated and intended to draw you into a scam? How could we possibly know?
Coming to Grips with A.I.
Perhaps some of you will recall the story of Orson Welles’s famous 1938 radio broadcast of “War of the Worlds,” which left many people believing that Martians had landed on a farm in Grovers Mill, New Jersey. The aftermath of this Halloween radio broadcast led to public outcry against broadcasters and calls for regulation by the FCC. In retrospect, it’s easy to see how something as seemingly benign as a radio show can have such a significant impact when people believe what they’re hearing.
And just as the FCC stepped in following the controversy over the War of the Worlds broadcast, governments are beginning to draft laws and guidelines around A.I. and other emerging technologies. Unfortunately, it’s slow going. Take cryptocurrency as an example: governments and regulators are still navigating how to manage and regulate crypto, and Bitcoin launched in 2009. A quick peek at my calendar tells me we’re already approaching the summer of 2023.
Technology will continue to evolve rapidly, likely far quicker than our ability to build legal protections around these new innovations. Had this been a normal tech blog about security, this is the point in our journey where I’d tell you to update your passwords and keep your digital world on lockdown. I’d encourage you to remain vigilant and do what you can to protect yourself from potential harm.
Unfortunately, A.I. generated scams won’t need passwords or malware to get our cooperation. We may fall prey to them simply because we believe they’re real.
Bates, Philip. “5 Ways Hackers Use Public Wi-Fi to Steal Your Identity.” MUO, July 4, 2022. https://www.makeuseof.com/tag/5-ways-hackers-can-use-public-wi-fi-steal-identity/.
veekaybee. “Everything I Understand about ChatGPT.” GitHub Gist. Accessed May 19, 2023. https://gist.github.com/veekaybee/6f8885e9906aa9c5408ebe5c7e870698.
“Data Breach: ChatGPT Was Always Prone to Open Source Code Related Vulnerabilities.” ETCIO.com, March 31, 2023. https://cio.economictimes.indiatimes.com/news/next-gen-technologies/chatgpt-was-always-prone-to-open-source-code-related-vulnerabilities/99132311.
“ChatGPT Can Now Access the Internet and Run the Code It Writes.” New Atlas, May 10, 2023. https://newatlas.com/technology/chatgpt-plugin-internet-access/.
Caballar, Rina Diane. “Ownership of AI-Generated Code Hotly Disputed.” IEEE Spectrum, March 29, 2023. https://spectrum.ieee.org/ai-code-generation-ownership.
Fung, Brian. “Biden Administration Unveils an AI Plan Ahead of Meeting with Tech CEOs.” CNN Business, May 5, 2023. https://www.cnn.com/2023/05/04/tech/white-house-ai-plan/index.html.
Smithsonian Magazine. “The Infamous ‘War of the Worlds’ Radio Broadcast Was a Magnificent Fluke.” Smithsonian.com, May 6, 2015. https://www.smithsonianmag.com/history/infamous-war-worlds-radio-broadcast-was-magnificent-fluke-180955180/.