Is machine learning just a beast consuming all of our human creativity? (Credit: Timothy R. Butler/Stable Diffusion)

Beware Regulating AI

By Timothy R. Butler | Posted at 9:42 PM

The AI revolution is a threat to artist and information gatherer alike. Like a speeding train, machine learning threatens to disrupt the livelihoods of a huge number of workers, and thus the “R” word has started to appear with increasing frequency: regulation. That does not bode well for any of our futures.

Perhaps more than any other machine learning product to date, ChatGPT has made clear the threat of this rapidly advancing realm. Able to author everything from better-than-Google answers to well-structured sermons, it puts an unimaginable swath of us who share ideas for a living squarely in front of the speeding AI train. Meanwhile, tools like Stable Diffusion threaten artists and others working in the visual arts.

The idea of computers making improvements to human-created works is not new, of course. Long before anything we could meaningfully call machine learning, we saw computer algorithms for basic image enhancement, grammar checking and other such things. These tools have been of varying quality, but notably they did not do one thing: create.

A few years ago, I first tested Topaz’s Video AI (née Video Enhance AI); I was putting together streamed ministry events during the pandemic and several of the contributors had shot their video clips using less-than-ideal cell phone cameras. Topaz’s tool was able to “restore” detail in an uncannily accurate way: woodgrain that was a muddy mess became lifelike again. More accurately, the tool recognized where woodgrain should have been had the video quality been better and created it anew.

I was so impressed, I purchased Topaz’s suite of tools and now use them regularly with both photos and video. Any creative worker editing visual arts should at least try the demos of Topaz’s Video AI and Photo AI. While imperfect, they can almost always take unusable (or barely usable) photos and videos and improve them.

Still, those tools mostly improve; they create only in an incredibly limited sense.

After an intriguing but restricted set of previews the public glimpsed, such as DALL-E, 2022 was the year of public access to creative machine learning: Stable Diffusion was placed entirely out in the open, and ChatGPT was at least made available for anyone to play around with.

With this popularity comes controversy. “Machine learning” is a technological attempt to mimic human learning. Thus, to make a “model” that generates written words or visual art, the computer must be exposed to existing works. Frequently, those learning experiences are created by indexing the public Internet, much as Google and other search engines do. That means these tools “learn” from copyrighted works, and thus influential copyright holders have begun raising an outcry about copyright abuses.
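To make concrete why exposure matters, consider a toy sketch in Python. It is nothing like the architecture of ChatGPT or Stable Diffusion, but even this tiny word-chain generator can only produce sequences it has already observed; the training text below is a hypothetical stand-in for a scraped corpus:

```python
# A toy illustration of why generative models must be "exposed to
# existing works": this little Markov chain can only emit word pairs
# it has already seen in its training text.
import random
from collections import defaultdict

# Stand-in for a corpus gathered from the public Internet.
training_text = "the cat sat on the mat and the cat slept on the rug"

# "Learning": record which words follow which. The model's entire
# knowledge comes from the text it was shown.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Creating": walk the learned transitions to generate new text.
word = "the"
output = [word]
for _ in range(8):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

Strip the training text away and the generator has literally nothing to say; the same dependency, at vastly greater scale, is what the licensing debate is about.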

At first blush, we might be sympathetic. Why should I, as a human creator, create a painting or photograph or essay or poem only to have a machine “digest” my work and create more works based on my own? If someone wants to learn from me, they ought to license my work, right?

Well, wait a second. Imagine the art style of Norman Rockwell or the writing of J.R.R. Tolkien. OK? Got that? Now, show me your license for it. What? You haven’t ever paid for a “digest license” to a Rockwell painting? You just saw the ones that were online? How about Tolkien? Did you ever sign an agreement to allow you to dissect his style? Of course not!

Copyright holders and lobbyists who want regulators to require such licensing for machine learning are asking for artificial intelligence to be prevented from “learning” the way that learning normally happens. If you’re a painter today, you’ve almost certainly been influenced by art you’ve never purchased, but saw from your earliest days, perhaps in museums or around the homes you lived in and visited.

No man is an island; no painter paints, no writer writes and no musician composes in a vacuum. All of us owe a debt to contemporary and past “creators” when we wield the tools of our own craft.

If computers are ever going to truly learn, they must likewise be allowed to absorb the “air” in which we live. Arguing that machine learning cannot learn from publicly accessible materials in the same way human beings do will hinder the growth of some of these incredibly exciting breakthroughs in technology.

Machine models steeped only in public domain materials would be stilted tools dumb to the present reality in which we live. Imagine a person growing up today completely forbidden to read or see anything created after 1927, the present cutoff for public domain works in the United States. Such a person would need a great deal of help to “catch up” at some point.

Occasionally, someone will propose that Google should have to pay to index web sites or that Facebook should be charged when people share news articles; these proposals, thankfully, have rarely gained traction because they are absurd. While any even slightly polite Internet “crawler” obeys so-called “robots.txt” rules that can prohibit indexing of a particular site, beyond that, an important part of making the Internet discoverable has been that we allow computers to gather publicly visible information reasonably unimpeded.

(This ultimately benefits even copyright holders, since it means their materials are discovered.)
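For the curious, honoring those rules is simple enough that Python’s standard library handles it directly. Here is a minimal sketch of a polite crawler checking a site’s robots.txt before fetching a page; the site and crawler names are hypothetical:

```python
# A minimal sketch of a polite crawler consulting robots.txt before
# fetching a page, using Python's standard-library parser.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # download and parse the site's robots.txt rules

# Fetch the page only if the site's rules allow our (hypothetical)
# crawler to do so.
page = "https://example.com/articles/some-post"
if robots.can_fetch("ExampleCrawler", page):
    print("Allowed to crawl:", page)
else:
    print("robots.txt prohibits crawling:", page)
```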

As Microsoft’s announcement of ChatGPT integration into Bing hints, machine learning is going to play a huge part in the next generation of Internet search and discovery. Overzealous copyright protection would hinder those improvements and only secure Google’s near-monopoly on search.

Given Google’s stranglehold, even a behemoth like Microsoft has struggled to compete with the search giant. It is in no one’s interest, other than Google’s shareholders’, for Google to remain the unchecked gatekeeper of finding information on the Internet. Reports suggest that Google is worried about the potential of ChatGPT to replace its search engine as people’s go-to for information. For the rest of us, that should be cause for celebration: a Google that is not securely in control is a better Google, and a world with real Google alternatives is a better world.

Overzealous regulation of machine learning models and their access to publicly visible information would be fantastic for a few, like Google, and horrible for the rest of us. If such laws were to come into being, Google could afford to license important information and so could Microsoft and Apple and Facebook. In other words, Big Tech would hit a speedbump, but it would keep going, while those with shallower pockets would be sidelined.

In my piece defending Apple as a good choice for privacy-seeking computer users last week, I mentioned the importance of Apple’s work to democratize machine learning through its advanced chipsets. While most of Big Tech loves centralizing data under its control (Google and Facebook, I’m looking at you), Apple has been pushing the leading edge of local machine learning that happens on your own devices.

The importance of Apple devices having machine learning-oriented “neural engines” is that these advances can now happen on your own device, with your own data staying put in your hand or on your lap. Apple’s recent contributions to Stable Diffusion are a step even further, as the company not only uses its tech prowess to bring all of us technology capable of decentralized machine learning, but also helps Open Source software leverage that tech.

In an era of mind-blowing leaps in computers “thinking,” we want that thinking to be done in the open. Models like Stable Diffusion can be explored and extended by anyone with programming knowledge, allowing average people to share in the benefits of this realm.
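How low that barrier already is may surprise people. As a sketch (assuming the open-source Hugging Face diffusers library and a publicly published Stable Diffusion checkpoint, neither of which is Apple’s own tooling), a few lines of Python can generate an image entirely on an Apple silicon Mac:

```python
# A minimal sketch of running Stable Diffusion locally on an Apple
# silicon Mac. Assumes: pip install diffusers transformers torch
from diffusers import StableDiffusionPipeline

# Download a publicly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# "mps" routes the computation to the Mac's own Apple silicon GPU,
# so the generation never leaves your device.
pipe = pipe.to("mps")

# Generate an image from a text prompt, entirely locally.
image = pipe("a painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```

Nothing about that run touches a Big Tech server once the model is downloaded, which is precisely the decentralization at stake.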

However, open models will be the technological equivalent of that person stuck studying pre-1927 works if draconian copyright laws come into play. Only proprietary models, the ones controlled by Big Tech (or Big Government), would have access to much of the relevant knowledge in the world. And, with that, Big Tech would increase its dominance in our lives, the exact thing Apple has kept in check by offloading Big Tech power onto individual, user-controlled devices.

Publicly available models for normal people like you and me would fall increasingly short of those more advanced proprietary ones, leaving the owners of the latter unchecked to study everything about us and use it to their advantage against us.

What might start as a protective measure for independent artists and the like could quickly become a cudgel against them as the few entities with sweeping, expensive licenses to human knowledge and art could generate new works that increasingly crushed the indie creators who could only use hobbled tools. Whether or not AI ever becomes Skynet, the best weapon against AI becoming anti-”normal people” is for normal people to have access to the best of the technology.

That goes for protecting against Big Tech, but doubly so against governments. Government, of course, will do whatever it wants. No matter the copyright laws, we should assume the NSA, CIA and their peers around the world will continue to scan any and all information they can reach under the guise of protecting us from rogue actors.

Imagine a world where the government increasingly has artificial intelligence capable of predicting what people will do, but the people have only the stone-age equivalent. Even if such a Skynet world were kept on a leash, with human government officials controlling it, we might then find ourselves in something akin to Minority Report.

End-to-end encryption is a check on the government’s increasingly advanced surveillance powers. Does it allow some evil to be cloaked in its protection? Sure, but that’s the cost of a free society. Likewise, will broadly unregulated machine learning allow for some copyright abuses and other unfortunate things? Yes, but it too will protect free society.

(Before you say, “Well, just restrict governments’ access to machine learning on copyrighted materials, too!” consider: even if all the free world adopted such a path, the authoritarian Chinese government shows care for intellectual property only when it suits its purposes. If countries like the United States tied their technological hands behind their backs, that would just hand AI superiority to an increasingly bellicose China.)

To be sure, there are risks to allowing machine learning to continue on its current path. Will people start favoring AI art over human-created art? I remain dubious that AI will ever truly replace human creativity, but like every major technological advance we’ve witnessed over past centuries, it will disrupt some people’s work. Those risks are just much, much smaller than the risks of letting that same progress roll on (which it will, whether we like it or not) while allowing only an elite with enough money (Big Tech) or restriction-evading power (Big Government) to be the keepers of that progress.

Machine learning is here to stay; let’s at least make sure it is here openly and accessibly.

Timothy R. Butler is Editor-in-Chief of Open for Business. He also serves as a pastor at Little Hills Church and FaithTree Christian Fellowship.

"
Share on:
Follow On:

Join the Conversation

1 comment posted so far.

Re: Beware Regulating AI

I believe that AI shouldn’t affect jobs that involve creativity. AI can produce good work, but creativity cannot be replicated, just as in the case of web development: AI would struggle with custom website development. Ready-made templates are the best AI can produce.

Posted by jack eduardo - Apr 10, 2023 | 3:19 PM
