
An Artificial Discussion Regarding Artificial Intelligence

by Tyler Berbert, March 29th, 2023

Too Long; Didn't Read

AI is a truly revolutionary technology on a longer timescale. Once you know the facts about it, though, the conclusion is inescapable: its current capabilities are being overblown to sell products. The more we know about how that's happening, the better positioned we'll all be when AI actually does approach human-level intelligence.

To talk for real about artificial intelligence, we’ll have to use the term “machine learning” too. AI is the big thing people are using machine learning for. Machine learning is AI’s foundation, the math and algorithms that power it.


They aren’t the same, but their Venn diagram has enough overlap that it doesn’t really matter for our purposes.


Basically, there’s math that describes curves. Remember parabolas? Horseshoe-looking things where the ends keep going?


You can model a ton of things in real life with equations of curves, and specifically, with absurd amounts of them layered in absurdly complex ways. Computing power is cheap now. That plus the Internet’s ability to create massive amounts of training data is quite a combination.
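

To make "layered curves" concrete, here's a minimal sketch in Python (using NumPy, with made-up numbers; none of it comes from any particular product). Each layer mixes its inputs with weights and then bends the result with a simple curve; stacking layers is the layering.

```python
import numpy as np

# A toy illustration of "curves layered in complex ways."
# Each layer mixes its inputs with weights, then bends the result
# with a simple curve (tanh). Stacking layers stacks curves.

def layer(x, weights, bias):
    return np.tanh(weights @ x + bias)  # a weighted mix, then a curve

rng = np.random.default_rng(0)
x = rng.normal(size=4)  # an input with 4 made-up features

h = layer(x, rng.normal(size=(8, 4)), rng.normal(size=8))   # first layer of curves
y = layer(h, rng.normal(size=(1, 8)), rng.normal(size=1))   # a second layer on top

print(y)  # one output, built from curves layered on curves
```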


People who do this modeling are typically trying to find some optimal value on many curves at once — the top or bottom of the parabola, the closed end of the horseshoe.


Some curves are usually more important for the final output than others; you want to get really exact about optimizing some and can be fuzzier about optimizing others.


You can have the computer learn the correct “weights” to put on different ones. You can have it narrow down the modeling equations themselves.
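

Here's a hedged sketch of what "learning the weights" can look like, assuming the simplest possible model: a weighted sum of three inputs. The data and the "true" weights are invented for the example.

```python
import numpy as np

# Invented data: 100 examples, each with 3 input features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_weights = np.array([2.0, -1.0, 0.5])  # the relationship we want to recover
y = X @ true_weights

w = np.zeros(3)  # start knowing nothing: all weights zero
lr = 0.1         # how aggressively to adjust the weights each round

for _ in range(200):
    error = X @ w - y             # how wrong the current weights are
    grad = X.T @ error / len(X)   # the slope of the error for each weight
    w -= lr * grad                # nudge each weight downhill

print(w)  # ends up close to [2.0, -1.0, 0.5]
```

The loop is the whole trick: measure how wrong the weights are, work out which direction is downhill for each, and nudge.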


In the end, the computer finds a set of values that optimizes some end result. This is the "magic" (heavy quotes) of machine learning.


You can take an input — a verbal prompt, a set of data about your preferences — and produce an output that's pretty lifelike in some dimension — a drawing, a celebrity's voice, a song recommendation.


Computing power may be cheap, but paying humans to do all that math, or even to program that math into a computer, gets expensive.


Smart people figured out how to make machines do a lot of that legwork for us: there are ways to make machines “travel along” equations of curves and follow them “downhill.”
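

That downhill trick has a name, gradient descent, and it's the same move the weight-learning sketch above was making. Stripped down to a single parabola (the curve and numbers here are arbitrary), it looks like this:

```python
# Gradient descent on one parabola: f(x) = (x - 3)^2, lowest at x = 3.

def slope(x):
    return 2 * (x - 3)  # the derivative of f: which way is uphill, and how steep

x = 0.0     # start anywhere on the curve
step = 0.1  # how far to move each time

for _ in range(100):
    x -= step * slope(x)  # move against the slope, i.e., downhill

print(x)  # ~3.0, the bottom of the horseshoe
```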


This kind of math has been customizing your Google results for a couple of decades. In the meantime, there's been a graphics-card arms race. There's been AlphaGo, which beat one of the world's best Go players in 2016.


There have been Boston Dynamics robot dogs. There have been apps like Shazam and Siri. There have been image generators like DALL-E and Midjourney.


All this is cool, but the money has not moved far from search results. It has poured, hand over fist, into an adjacent business: showing people things they'll like on social media. Things you'll spend time looking at, and ads you might click on.


The wider concept of AI, meanwhile, has been a staple of sci-fi for somewhat longer, and it’s interacting with this “popping off” the field has experienced in the last five to ten years. We see this exponential growth and think of “The Terminator.”


The people selling AI are happy to let us think this, to skip some key facts, so we’ll believe their claims of “yes! compare us to Skynet from the Terminator movies! [Crazy Invention #4852], which my company builds, is 5-10 years away tops!”


I understand why people would see the admittedly rapid growth in what it can do and think we’re really on a precipice of giving it consciousness.


Even if that precipice is more on the scale of 200 years instead of 2 or 20, that’s a pretty significant thing compared to the length of time we humans have been around.


We really could, on that timescale, break past the point where AI programs are just “autocomplete-for-concepts-instead-of-words” like they are in ChatGPT or Searle’s Chinese room argument.


We may actually put the right programs on the right biologically inspired hardware and truly mimic the human mind, raising the questions of humanity and sentience posed by Philip K. Dick and others.


If and when AI truly rivals our emotional and perceptual intelligence, possessing an unbroken consciousness like ours instead of mere skill in specific domains, we’ll be in for a reckoning.


It will not be today, tomorrow, next week, or even next year.


The massively parallel (meaning, in this case, working at the same time) neural connections we have in our brains — this is our castle's moat, so to speak. It's what makes "artificial general intelligence" years, or more likely decades, off.


Deep learning, the branch of machine learning behind a lot of its recent advancements, is a cool name for the statistics and calculus of mimicking input-output relationships we see in the real world. It does not — not yet, at least — mean that computers are "deeply learning" anything.


As impressive as it is, it’s math happening in a computer, in a way fundamentally alien to how “math happens” in your brain.


The brain uses shockingly little electricity. Data centers use shockingly large amounts. The latter are brute-force machines. They process fewer threads, and less flexibly, than we do; each one is just so fast.


Such a machine calculates and narrows down possibilities so quickly that even the best chess and Go players can't keep up.


Its Achilles' heel is that it requires fairly well-defined problems and fairly clean inputs and outputs. A board game. A robot not falling over. It can't deal with an absurdly diverse and quickly shifting set of "optimization" problems at once, switching between them all the time.


Whatever AI's strengths and flaws in individual areas, it can't bind many domains together the way we do, changing focus as needed, adapting and transferring behavior across them. It can't do this even at the level of a human toddler.


A big target in AI research has been "one-shot" or "few-shot" learning: learning to do something without massive data sets and training times. There's been some success here; it still lags far behind our brains.


We're naturally good at few-shot learning in many domains, and from an early age. We quickly learn rules and strategies from almost no training data. You can show a kid how to use an iPad, ride a bike, tie their shoes, or explain the moral of a story (not just recite its plot).


You can barely train a computer to do one of these things well. If it could do cognitive reasoning, locomotion, and human interaction without being terrifying, we’d be using it for all those things already.


Businesses are allergic to messy human labor, always striving for cheaper machine versions. They would have made this happen. They haven’t.


Computational neuroscience labs in universities are the ones interested in modeling the internal thought and reasoning our brains do about the world. It's not the aim of the models that blend two people's photos to see what their babies would look like.


Likely for decades, it's only us humans who'll have a lock on navigating the world in fuzzy, adaptive, cobbling-strategies-together-on-the-fly ways.


This general ability to learn about and navigate in the world, starting from a base of emotions and experiences, is not something that the tech industry has found useful when building AI.


Sparse are the ways in which real brain-mimicking has happened in machine learning so far. Copying the brain has not been necessary, which is to say profitable.


Companies (and the universities they fund) have instead optimized for things more measurable: the number of clicks on a social media feed or the accuracy of a drone strike.


If it has tried to mimic us, it’s been a profit-driven jab at obtaining the results of certain things brains can do. Some impressive and interesting forms of pattern recognition happen in computer vision.


All this is to drive home the point that AI is neither “magic that makes computers come to life” nor “magic that some people can do with computers.” It’s brute-force math. Knowledge mixed with computing power. Leverage. Most AI models are crunching numbers to do one thing: make money.


People with this golden goose, this powerful tool, are economically incentivized to claim AI is just around the corner from doing something cosmic or apocalyptic in order to hide the more real, mundane, less attention-grabbing things it’s doing for their business.


They have a vested interest in claiming it’ll be capable of certain big things in the next 1 or 5 or 10 or 15 years. These are timelines that investors like. Full self-driving cars not panning out is just one of the first flops we’ll see on this front.


Lofty sci-fi premises and promises make for good movies. Ex Machina, Her, Blade Runner. That doesn’t mean they’re true.


When we accept A.I. developers' own framing of their products as (1) inevitable and (2) politically and economically transformative, it becomes easy to elide the obvious fact that the forms A.I. takes (i.e., as chatbots! As "search engines"!) and the uses to which it is put (i.e., the jobs it will augment or replace! The tasks it will make easier or harder!) are contingent on the political and economic conditions in which it emerges.

I’m open to the possibility that we rest on the edge of a precipice—that a world “unrecognizably transformed” by large language models is only a matter of months away, as Paul Christiano seems to believe. But a basic rule of thumb of this newsletter is that things change slowly and stupidly rather than quickly and dramatically, and a proper A.I. criticism needs to account for this likelihood. For now, I am filled with resentment to find myself once again in the midst of a discourse about technology in which the terms and frameworks for discussion have been more or less entirely set by the private companies that stand to profit off of its development and adoption.


— Max Read, What Facebook criticism can teach us about AI criticism


Let’s take stock for a second. How, historically, has this gone for us, accepting the “terms and frameworks for discussion” that the technocratic elite imposes on discussions of technical matters, be it cryptocurrency or banking?


How many economy-crashing bank crises happened between FDR's post-Great Depression reforms and the 80s? How many have happened since Reagan rolled them back?


Max Read cites the excellent Harper’s article by Joe Bernstein about how Facebook has already done this behind the curtain; it has sold an entire class of people on a shaky and increasingly untenable model of its own ads’ effectiveness.


Just because a pyramid scheme is based on an actual product doesn’t mean it’s not a pyramid scheme.


A big point here is this: tech is not "better" than finance when it comes to misleading the public about its offerings. It's arguably worse. You saw what happened to cheap blood testing, coworking spaces, and blockchain. Perfectly fine industries, tainted by foolishness.


Look at how they massacred my boy.


They mess things up internally, but in ways that affect all of us, because our friends and neighbors buy into their bullshit and let them set the terms of discussion. Then we pay for their golden parachutes.


Millions lose their livelihoods in economic events that could have been avoided with a couple of well-placed regulations. Tale as old as time.


It behooves us to try something new with AI. It pays to give our future selves the freedom to make sound decisions about it, based on sound information. It pays to learn the truth about it. Taking others’ whims about it at face value ensures you’ll pay a steeper price in the future.


People are getting good at using ML and deep learning for all kinds of things — making computers recognize faces, replicate voices, and show people content that will keep them scrolling.


Those uses won't start improving our lives, rather than just making some people very rich, unless we make it so.


In the meantime, of course, they can be very entertaining.


AI can be used for good, for evil, for the simply weird, or for anything else, depending on human conditions. At present, this just means it all depends on where the profit motive leads.


Most AI expertise has been swept up by companies trying to squeeze us for dollars. There is simply not as much money in public, transparent, people-driven efforts to make life fundamentally better (formerly the domain of government) or even in brain simulation.


It’s in bread and circuses and military uses. These types of conditions are what make AI, and any other technology, go in the direction it does.


AI is out there now. That fact is water under the bridge. Talking about it like it’s magic only benefits the people who have the most to gain from bullshitting about it.


Dressing AI up in big, heavy talk (in this case, doomsday talk) and preying on people's uninformed interest in it is a pattern we've seen before.


“Theranos/WeWork/NFTs will change everything.” Correction: they could have changed a lot. Instead, they oversold themselves. They overpromised and underdelivered.


This hysteria around AI, overselling both its capabilities and risks so that some people get richer, is tough to disentangle from people just being naturally interested in it. It’s an interesting thing with a wide range of uses.


People are going to buy into it regardless, as they did with crypto and NFTs, and probably more so. All the more reason to know the truth about it.


If you take nothing else away from this post, take this: how we use AI on each other is, at present, a vastly more pressing issue than anything about AI growing conscious. Think about it: if it did become conscious, and wanted to harm us, why? Why might it want to do this?


Does it not seem patently obvious that the way we bring it into the world will have some effect on its stance toward the human species?


Does it make sense to entrust this, and the narratives around it, entirely to CEOs and tech-adjacent capital owners, a group with a terrible track record on this, and one that studies suggest has a higher incidence of psychopathy than the general population?


Do we want them in charge of its usage, research, and development, right up until the moment it becomes sentient, be it in 2030 or 2230?


As Read pointed out, AI fearmongers find it convenient for people to not think about this.


They find it much more convenient for this fact to remain hidden so they can talk about its advancement being inevitable, not worth questioning, and the Singularity being right around the corner.


There’s nothing inevitable here except what humans make inevitable. Gorillas aren’t working on this.


If and when artificial general intelligence arrives, it’ll be a question of astrobiology, of an alien life form. Until that moment, it’s one of philosophy, of history, of the humanities. Will we make it like us, in our image, perhaps even better than us?


Will we make it worse, a mirror of our most sociopathic tendencies? The crazy thing is: humans decide. We’re the builders. We’re exercising control over how alien or humanlike it is.


Except, of course, "we're" not. Some people are deciding far more than others. Are we okay with that? Are we okay with who those people are? Are we okay with where they're directing this technology?


Having seen what the high-capital-owning class tends to “make inevitable” — its prognostications in other industries, its short-sighted and fact-ignoring and herd-mentality practices that have crashed so many billion-dollar enterprises — some skepticism about how they talk about AI is warranted.

