What makes human, human?

Some words on AI

Theresia Tanzil
Jan 28, 2023

“AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’.” — Donald Knuth

Hey. Been a while since I published anything here, and my brain is getting constipated with stuff that will just get more tangled the longer I wait. So let me just share whatever I’m up to these days. Start anywhere, they say.

I’ve spent the past month in this rabbit hole of AI. You will undoubtedly have felt the AI buzz lately. ChatGPT, Midjourney, Lensa, Stable Diffusion, and most recently: Atlas from Boston Dynamics flexed its moves again. The fuse burned slowly as mainstream users got exposed to and adopted these generative AI products through 2020, until the end of 2022 when the hype finally exploded.

AI has finally hit the mainstream collective consciousness.

All the underlying technologies have been here for a while but it took the right UI and UX innovation (chat interface, image editor app, Discord prompts) to unleash the adoption rate and open up the world’s perception of what AI is already capable of.

Without getting into the details, do know that this explosion is also enabled by advancements in two other less visible building blocks in tech: data (needn’t say more) and compute power (thx gaming and crypto). I also won’t get into the technical details of the concepts and alphabet soup of AGI, ANI, LLM, and RLHF here, but today I do want to explore this question: what about us, humans?

What is this thing we’re creating here and how should we respond? Given that this does for brains what the Industrial Revolution did for muscles, how do we view knowledge work now? What is still worth doing and what is not? What should we do with all this extra “mental power”?

What is thinking? What is creativity? What is art? Do we merely play with definition and semantics? What makes human, human?

What is scarce and what is abundant now? What opportunities have appeared? What do we need to put in place? What does the game look like right now?

I want to riff on some basic principles and unpack ideas built upon my interests in tech, sociology, and psychology. Let’s see if we can arrive at some useful lenses we can put on to navigate our own contexts, figure out what it means for us personally, and brace for the impact.

Also, pardon the highly-philosophical vein that this post runs on — can’t help it. Well, this IS philosoraptech.

First sticking point: employment. The extreme interpretation is that “machines can do everything, we’re all gonna get replaced”.

Fear.

At this point, what’s more likely is: they can do most things, and I think they should be able to do more than they currently can.

Yes, still not helping with the fear, I know.

But the broad agreement in my social circle (your run-of-the-mill white-collar mix of technologists, creatives, artists, and intellectuals) is that fewer people will be replaced by these technologies than will be replaced by their peers who use these technologies.

Succinctly: AI will not replace doctors. Doctors who use AI will replace doctors who don’t. — well, until we are able to mathematically unbundle, represent, and model the different functions of a doctor ;).

As with most technologies, those who survive and get ahead are the ones who think opportunistically.

Adapt. Surf the change. Learn, fast. Very fast.

Of course, this is a massively gross (and rather snobbish) oversimplification. There will be blood.

Historically speaking, humankind as a group is bad at acting swiftly and coming up with the right governance — if there is such a thing as a silver bullet at all — until it’s right in front of us.

There will be real people taking a hit, no matter what they do or don’t do. There’s only so much upskilling, rebalancing, and governance that can happen. This won’t go down all pretty and dandy. Survival is selective.

So, back to the question: how do we respond? First, it’s worth refreshing our minds with a few concepts.

  1. Kurzweil’s Law of Accelerating Returns: human progress moves quicker as time goes on. You can Google the math, but in a nutshell, the argument suggests that the 21st century will achieve 1,000 times the progress of the 20th century.
  2. Moore’s law: the number of transistors on a chip (a rough proxy for computing power) doubles approximately every two years.
  3. Data growth rates are no less mind-boggling. PCMag claimed in 2018 that 90% of all data in existence had been created in the preceding two years. There’s a paper published by a lab raising concern about AGI running out of “HQ language training data” by 2026. I can’t speak for AGI, but I don’t think ANIs will run into that problem, as 1) data is personal, fluid, and contextual, and 2) ANI can derive more value from proprietary data. Scaling is perhaps also not the only bottleneck if we set the goals to be accuracy and usefulness.
  4. Deterministic and probabilistic thinking, computing, and systems. I’ll go more into this later.
  5. Workism, Taylorism, capitalism, education. We are complicit in breeding, molding, and squeezing humanity out of ourselves, from school till the workplace for the sake of scaling, replicability, and predictability. Processes, standards, uniformity. Human resources.
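To make the compounding in points 1 and 2 concrete, here’s a tiny back-of-the-envelope sketch. It’s illustrative only (`growth_factor` is my own toy helper, not a real model of progress): anything that doubles on a fixed period explodes much faster than intuition suggests.

```python
# Illustrative sketch: the compounding behind Moore's-law-style doubling.
def growth_factor(years, doubling_period=2):
    """Capacity multiplier if something doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # one decade: 32x
print(growth_factor(50))  # fifty years: over 33 million x
```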

With those five things in mind: the world is changing faster than we will be able to comfortably keep up. What mindset shifts do we need to practice? What personal and systemic changes do we need to make?

Let’s start here: What are computers good at?

Literally: compute.

Expanding it a bit: recalling, scaling, and computing.

Let’s take a bit of detour and bring back the deterministic and probabilistic lens we touched upon earlier. What is a deterministic lens and what is a probabilistic lens?

Deterministic is when, given the same input, you expect to receive the same output every time. Probabilistic is when you can afford some variation. It’s easiest to think of human activities in these terms: deterministic is order-following, and probabilistic is improvising and filling in the gaps, e.g. driving, reading, and listening.
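As a toy illustration of the distinction (the function names here are made up for this sketch):

```python
import random

# Deterministic: the same input always yields the same output.
def sales_tax(amount, rate=0.1):
    return round(amount * rate, 2)

# Probabilistic: the same input can yield different outputs,
# drawn from a distribution of plausible responses.
def improvise(prompt):
    moves = ["echo it back", "riff on it", "question it"]
    return random.choice(moves)

assert sales_tax(100) == sales_tax(100)  # holds on every call
# improvise("hello") may differ from call to call
```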

ChatGPT, Stable Diffusion, and the cluster of generative AI apps we know are made possible by probabilistic computing. They operate on statistics of how humans usually operate, derived from the human corpus: the human-generated text they were fed. They don’t need to understand what they were fed or what they generate. // Philosoraptech has entered the chat: What is the purpose of understanding anyway? While we’re at it, let’s also ask: what are the functions of keeping something in our memory? Hold this line of questioning for future posts; it will come in handy.
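A toy sketch of that idea: a bigram “model” that generates text purely from co-occurrence statistics, with zero comprehension of what the words mean. The corpus and helpers are made up for illustration; real LLMs are vastly more sophisticated, but the probabilistic core is the same.

```python
import random
from collections import defaultdict

# A tiny corpus of "human-generated text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: pure statistics, no understanding.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def next_word(word):
    # Sample a successor in proportion to how often it followed `word`.
    options = followers.get(word)
    return random.choice(options) if options else corpus[0]

# Generate a continuation by repeated sampling.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```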

Before the dawn of data science, the majority of software created for mainstream use was deterministic in nature. Rigid algorithms, rule-based systems. The more predictable, the better. Probabilistic systems, on the other hand, were more commonly implemented in biology and robotics.

Many of the methodologies we use to run the world are also based on deterministic models. Manufacturing, project management, military. One of the first-order effects is that we also train ourselves to be machine-like. We craft exact syntaxes, memorise rules, and define processes for ourselves. We have even found beauty in this activity, interpreting it as an art of logic and discipline.

And there is nothing wrong with this. There are many contexts where predictable outcomes are imperative.

Deterministic activities have been the well-accepted domain of computers and business. Consciously or not, up to this point there has been a sense of safety: the perception that probabilistic activities are still territory that computers will not touch or be able to deliver on reliably.

I said earlier that doctors will not be replaced until we are able to mathematically unbundle, represent, and model the different functions of a doctor. The keyword here is mathematically. Any activity we can represent and model mathematically can technically be automated. Whether we should, who will do it, and when are inevitable questions, but they are different issues.

Once we do, what is left of a doctor? Or any role, really. What is the essence of a human? What makes human, human? And, how does it matter?

What is left for us to do? What is not worth competing on? What qualities should we double down on? What can’t machines do (yet)? What parts of your job and life are deterministic and probabilistic? In what ways?

Let’s see what the common arguments are in the debate of human vs machine.

Humans are creative? Creativity can be defined in many different ways. The act of creating, the act of mashing different inputs to create something new, the act of improvising. These are essentially probabilistic acts, or at least can be probabilistically represented.

Artists claim that AI-generated art is not art, that it lacks the “heart”. But I call bullcrap. People acknowledge the weirdest contemporary shit as art. People get moved by anything they can attach meaning to. It’s only a matter of time before AI-generated art gets acknowledged as “legit art”, by mistake or not.

The legal and ethical concerns around the training data are a different yet important and truly interesting matter, though. Aren’t computers merely emulating what artists, craftspeople, and experts have been doing? Getting inspired by other artists’ styles, learning chess moves, and standing on the shoulders of giants. Is the real issue that they are not being compensated and credited for their work? Is this debate really a question about power and validation?

Humans can embody, perceive, and adapt to physical spaces? I ship Atlas and ChatGPT. They’re canon in some future multiverses as far as I am concerned. Also, they can dance and parkour better than most people.

Humans are compassionate? OK, this is one area I can’t argue against. I cannot imagine benefiting from emulated empathy. Standup comedy and most performance arts are also mostly resilient, in my view.

Humans can improvise? When you improvise, you react by drawing upon your past experiences. You are running a probabilistic calculation in your body. On what grounds can we claim that our improvisation is more noble and legitimate than an AI system’s?

And let me ask you this: how often do you feel like merely a robot at work and how often do you feel the freedom of human agency? Haven’t we been training ourselves into machines and machines into humans? Are we obediently standing in line for our robotomy?

Humans are unpredictable, incomprehensible, and inconsistent (fortunately for social scientists and people-watchers like myself, and unfortunately for managers with business KPIs and wet dreams of full control, predictability, and infinite growth). Hence, I don’t blame ChatGPT for being inconsistent and senseless sometimes. I can’t even make sense of myself most days. I contradict myself, riddled with biases and dissonance. My words, thoughts, feelings, and actions are often inconsistent. Let alone the model’s. It is barely someone; it is everyone, everywhere, all at once.

Even if we can etch and train discipline and logic onto ourselves through the intellectual and rational realm, living is still a probabilistic act. Perception is probabilistic. Biology is largely probabilistic. Seeing, interpreting, and sensing are probabilistic activities. I’d even argue that our sense of identity is probabilistic. It starts out mostly genetic, then changes and evolves with the beliefs we hold, life events, and other aspects of nurture.

What is static, unique, and irreplaceable about us then? Consciousness? Ahh I don’t have the wherewithal yet to go into the discussion of consciousness ;)

Tacit knowledge. Expert intuition. Aha, I think we’re onto something here.

The training data so far are explicit knowledge. The machines are currently feeding off our final output. They lack our behind-the-scenes, a fundamental piece of our psyche.

We have a model of the world, the context, and all the subtle factors that go into the final product. We know how many drafts/shots/takes/iterations we go through before we publish the final product. We know how much learning, synthesising, and reasoning go into each piece. This process is valuable and will never be perfectly captured, expressed, or represented as data. Most of it will never be on the internet or in any system; it lives only in your head.

There are only so many embeddings we can engineer, so much fine-tuning we can do, and so much compute we can optimise. One key bottleneck will be the amount of delicious tacit knowledge trapped in people’s heads that we can make explicit — structured or unstructured.

For what it’s worth, GPT has instruction tuning to teach the model our thought process, going step by step in its reasoning. But speaking from experience, it’s not easy to get people to run in verbose mode and capture the output, not only because of the lack of incentives to do so but also the lack of an easy interface for it. To be fair, there are many low-hanging fruits in this area before we even get to these latent data. We have a wealth of data still trapped in proprietary pockets.

This is also where my main interest lies: the transformation process from tacit to explicit knowledge. I suspect/hope biotech will play a big role if there is any breakthrough here.

This can also go both ways. Either we continue down the bandwagon of self-expression and embrace Augmented Intelligence as the alternative reading of AI, or we start defending ourselves by guarding our tacit knowledge and dying with it.

What do you think? Is it a good idea to fill this liminal space? How can we make this a win-win?

Closing thoughts

OK, let’s wrap up. I think that’s enough of a trunk to plant here. I’ll attach more branches and leaves as I gather and share future loose thoughts on AI.

I started this as some kind of throat-clearing and mental pipe-unclogging, but I hope you find it useful for orienting in some way, sparking your curiosity, and tingling your spirit to build.

I have a lot more questions to think through and talk about, as I shared at the beginning of the post. I’ve only briefly gone into “what makes human, human?”. I think I will go into “what aspects of knowledge work are worth investing in if powerful AI is here to stay?” next.

Since you’ve gotten this far, let me thank you by dumping three more points on you, haha!

First, an important factor in technological disruption is accessibility, an aspect best captured in the saying “the future is already here — it’s just not evenly distributed”. Sure, they can automate/replace/outsource/offload 74% of “a doctor” in this part of the world, but an “89% doctor” still exists somewhere out there, perhaps next door.

There are many moving pieces in the value chain to achieve different levels of technological edge. Who will control or own the distribution and who will be able to get a piece? Which piece? Your technological edge is someone else’s technological disruption. Are those 26% and 11% an asset or a liability? It depends.

Second, I remember one of the authors of the book “The Second Machine Age” saying in a talk:

Suppose that Silicon Valley fell into the ocean, the guys at DeepMind decided to sit back and stop working, and we had no more progress — just the technologies that we have today. If companies started implementing them like the cucumber guy did, we would have so many economic applications. They’d have a first-order effect on GDP, on jobs and work.

The state of implementation is so far behind the state of the frontier of the technology that we will have a couple of decades just implementing what we already have in place and that could be millions of jobs.

We’re only at the beginning of the euphoria, where the general public and enthusiasts join those at the frontier and start building. We will see more and more products for use cases where probabilistic models fit better.

Third, I expect a behemoth of an ecosystem to sneak up on us and arise around personal data. I mean this in a sense larger than our conventional interpretation of privacy and ownership of the data we produce on different platforms. What does personalised-everything-on-steroids look like? Tools to capture, model, own, and augment your exhaust, your thoughts, your life stats, your style, your beliefs, your knowledge? You-as-a-service? Cyborgs? ME-taverse?

Hit me up in the comments and let’s chat. I haven’t seen this level of good vibes and energy around tech for a while. I believe we are about to witness a substantial flood of interesting changes in knowledge work. I should be worried, as learning, synthesising, and writing are a couple of the big use cases these LLMs already seem able to cover, but I am excited to jump off the cliff and put my humanity to the test.

Note: In case you’re wondering, yes, I am pushing The Business of Data series onto the backlog as I have lost the momentum. I realised I haven’t fallen back in love with that side of the world just yet. I spent the past 3 months beating myself up for not being consistent, but hey, I’ll embrace my humanity and pivot.

Originally published at Proses.ID.
