UKC

Personification of Algorithms

New Topic
This topic has been archived, and won't accept reply postings.
 Xharlie 15 Mar 2023

Has anyone any recommendations for great Sci-Fi that addresses the topic of the personification of algorithms, or questions whether humans should or should not try their damnedest to confuse the boundary between true sentience and systems?

The mentats from Dune spring to mind but they were counter to the entire concept of computation – not just the blurring of this boundary. Turing hypothesised a test to distinguish an automaton from a human, too, but that was a thought experiment and did not ask whether either the ability to distinguish or the ability to prove indistinguishability was a worthy aim.

----

It seems that certain actors in this world, today, are dead set on convincing us (in the developed world, at least) that their very-much-not-sentient algorithms are either on the level of God or, in the meeker cases, at least as good as humans.

I have some very strong opinions about this – and some theories as to why they have an interest in doing this, since it seems unproductive outside of the laboratory – and I have done quite a bit of reading (and first-hand observation) on the tactics, strategies and campaigns with which they operate.

Surely this group also includes Facebook, Google, Microsoft and every aspirant in a certain "valley". This seems like a credible threat.

... and I'd like to see how sci-fi has treated it, for approaching such eventualities seems like the very Reason for good sci-fi to be written.

 broken spectre 15 Mar 2023
In reply to Xharlie:

Blade Runner 2049, Officer K's girlfriend is an algorithm.

2001 A Space Odyssey, murderous HAL-9000 is an algorithm.

 a crap climber 15 Mar 2023
In reply to Xharlie:

I write algorithms for a living, I often find myself swearing at them and calling them rude names. Does that count as personifying them?

 wintertree 15 Mar 2023
In reply to Xharlie:

Heinlein’s “The Moon is a Harsh Mistress” explores this from a human interaction perspective.  It’s one of his best books, and another theme from it - the importance of militarily controlling the high ground (space) is likely to be a much more critical conversation than AI in the next 5 years.

For what it’s worth, IMO current AI efforts are in large part the emperor’s new clothes.  Look beyond the marketing - overt and covert - and the carefully framed and primed demos and it’s the emperor’s new clothes as far as real paradigm shifts are concerned. 

 mountainbagger 15 Mar 2023
In reply to Xharlie:

Not so much an example of personification, but certainly a commentary on what could happen if AI is omnipresent: Skynet (The Terminator).

Personification, one that springs to mind that I thought was pretty good: I, Robot

 Jimbo C 15 Mar 2023
In reply to wintertree:

> For what it’s worth, IMO current AI efforts are in large part the emperor’s new clothes.

I agree. What is currently being marketed as AI is not intelligence, it is finding patterns in large data sets, but that doesn't sound as sexy. 

 wintertree 15 Mar 2023
In reply to Jimbo C:

> I agree. What is currently being marketed as AI is not intelligence, it is finding patterns in large data sets, but that doesn't sound as sexy. 

… and, to be fair, the leading systems have a very powerful ability to find those patterns, and if you look at them as a turbo-charged version of a search engine, they're a genuinely useful tool.

However, question them carefully and it's clear there's no cognition, no inquisitiveness, no spark of creativity behind what they do.  It'll be a shame if the over-promises from the marketing departments end up hurting the research if the bubble bursts before the claimed capabilities actually happen.  FWIW I don’t think any purely deterministic system that - with a couple of mathematical transformations - can be boiled down to one (admittedly giant) set of polynomials turning inputs into outputs will ever deliver.

 freeflyer 15 Mar 2023
In reply to Xharlie:

Good thread. I must re-read the Dune series.

I disagree with your view that the big AI players want to portray their products as God-like, although like populists, they will say absolutely anything to further their pursuit of profit. In the current situation, that appears to involve downplaying the quite spectacular results they are currently getting from statistical algorithms and big data, and desperately trying to lower expectations.

No doubt a good few people will be along to suggest the Corporation Wars series by Macleod, which I am about halfway through, and hoping that something interesting will happen soon.

I would recommend the James Blish Cities in Flight books. The City Fathers (computers that run the spaceships) are basically ChatGPT in about 50 years time, and written 50 years ago. They can shoot the human city leaders if they fail to perform their jobs. This I like!

OP Xharlie 15 Mar 2023
In reply to wintertree:

> For what it’s worth, IMO current AI efforts are in large part the emperor’s new clothes.  Look beyond the marketing - overt and covert - and the carefully framed and primed demos and it’s the emperor’s new clothes as far as real paradigm shifts are concerned. 

I agree. I think, however, that during the craze the marketing may do irreparable damage to the concept of credibility, or to the utility of the recorded and written record.

What happens when everything written, recorded, filmed, drawn or realised in any medium must be approached as if it is deliberate fakery?

One could say that it was ever thus but, largely, institutions did have mechanisms to reduce or call out the fakery and nobody was evangelising a machine for the ready-production of fakes as if it were akin to God 2.0.

Now: we see the democratisation and automation of cheap fake content; an overt effort to glorify how "human" that content supposedly is; hugely significant names betting the house on the profitability of the machine itself; a crazy few crediting God 2.0 as some deity so mystical that we should proclaim them to be privileged priests and afford them exclusivity; matched by an equally crazy few denigrating humans and reducing all humanity to the level of the "stochastic parrot" in order that the algorithm might win the unfair fight.

It is mildly concerning.

 wintertree 15 Mar 2023
In reply to Xharlie:

> What happens when everything written, recorded, filmed, drawn or realised in any medium must be approached as if it is deliberate fakery? 

Or flip it; book authors, video game creators, TV content producers, and others are using generative AI to produce art for their commercial works.  But that “AI” is just mishmashing from the human creative inputs it was fed, with no spark of creativity.  You could look at it as “asset laundering” - it breaks the audit trail by which the creative individuals should be getting paid for their unique contributions.

> It is mildly concerning.

Yup, and also concerning are the people drawn in to seeing the emperor’s new clothes.

Post edited at 22:33
OP Xharlie 15 Mar 2023
In reply to wintertree:

> > It is mildly concerning.

> Yup, and also concerning are the people drawn in to seeing the emperor’s new clothes.

I think you just hit the nail on the head: it is the behaviour of the humans, faced with the advent of these algorithms, that is concerning – not the existence or capabilities of the large models, themselves.

I think that we are all agreed that the algorithms are being sold as something they are not – although I am not certain I am succeeding in communicating so, because it is late at night.

 owennewcastle 15 Mar 2023
In reply to Xharlie:

Not fiction, and I found it a hard read, but this discusses one perspective on how AI may be developed and what the dangers may be. The "control problem" concept is interesting, particularly as controls may be skipped by competing participants in the race to be the first to develop AI:

https://www.goodreads.com/book/show/20527133-superintelligence

For fiction (film), and closely aligned with Nick Bostrom's warnings, I quite liked Ex Machina.

 Luke90 16 Mar 2023
In reply to Xharlie:

> I think you just hit the nail on the head: it is the behaviour of the humans, faced with the advent of these algorithms, that is concerning – not the existence or capabilities of the large models, themselves.

Arguably always the case with any tech, isn't it? Whether new or old. "Guns don't kill people" etc.

> I think that we are all agreed that the algorithms are being sold as something they are not – although I am not certain I am succeeding in communicating so, because it is late at night.

The impression I've got is that the actual researchers and companies are being fairly straight about what they've produced and are leaving it to the journalists and commentators to bring the breathless hype. Because a lot of what these things can do, whether or not we allow the term "AI", is pretty remarkable.

I think the Emperor is actually wearing some clothes, in the sense that the output of a lot of these AI tools could be genuinely impactful, for better or worse.

 Petrafied 16 Mar 2023
In reply to wintertree:

> FWIW I don’t think any purely deterministic system that - with a couple of mathematical transformations - can be boiled down to one (admittedly giant) set of polynomials turning inputs into outputs will ever deliver.

But ANNs aren't purely deterministic (other than in very simple architectures, such as SVM-based or statically weighted feed-forward networks).  For example, the CNN architectures I've been working with have a myriad of ways of introducing nondeterministic behaviour: dropout strategies, stochastic optimisation algorithms, regularisation options, early stopping mechanisms etc. all mean that the same input generates different (if usually similar) results.  Then there's the use of genetic algorithms to perform optimisation, or architectures that auto-tune the hyperparameters, which add a whole new possibility for randomness to occur.

Also, I'd say that ANNs can model much more complex functions than polynomials by approximating non-linear mappings between inputs and outputs.  That they can do this is, in my opinion, where much of the strength of "deep learning" models comes from.

Not sure I'd describe the processes that go on in a complex neural network as "a couple of mathematical transformations".  For the Noddy ones in text books maybe, not in the real-world ones that do the things you're being somewhat dismissive about.

Oh - and I wish people wouldn't fixate on the "intelligence" part of "AI" and would pay more attention to the "artificial" bit. I think relatively few AI researchers are working on producing systems that cognitively mimic humans.  I preferred it when AI was better known as "knowledge based systems", and the current crop, including LLMs, are better described as "machine learning systems", but I guess that gets less PR kudos.

Post edited at 07:04
 wercat 16 Mar 2023
In reply to Xharlie:

I rather liked Angel One and Angel Two as they developed over the two Earthsearch Radio Series back in the early 80s - there is a general theme about whether what they call "free will" computers can be trusted

Sometimes turn up on Radio 4 Extra

 wintertree 16 Mar 2023
In reply to Petrafied:

> But ANNs aren't purely deterministic

What’s your source of “noise”?  The stuff I’ve seen is PRNGs.  They’re deterministic. 

> Not sure I'd describe the processes that go on in a complex neural network as "a couple of mathematical transformations".

> Also, I've say that ANNs can models much more complex functions than polynomials by approximating non-linear mappings between inputs and outputs.

Disagree with both points.  Fundamentally, both non-linear responses and convolutional layers can be boiled down to polynomials.  Taylor’s theorem.  Recurrence can be flattened out and multiple layers can be reduced to one. It’s all in the maths.

They all boil down to a single set of polynomials transforming inputs to outputs.  All the bells and whistles are genuine and useful innovations, but what they add is the ability of humans to better structure the systems for training and computationally efficient execution; at the end of the day they’re still a precise mathematical transformation of inputs to outputs.
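
To make that concrete, here's a toy sketch (one made-up neuron in plain Python, not anything from a real framework; the weights and the cut-off of the Taylor series are arbitrary): swap tanh for its truncated Taylor expansion and the whole "network" collapses into an ordinary polynomial in the input.

```python
import math

# A one-neuron "network": y = tanh(w*x + b).
# Taylor's theorem lets us replace tanh with a truncated polynomial
# around 0: tanh(z) ~ z - z**3/3 + 2*z**5/15, so the whole network
# becomes a fixed polynomial in x.
w, b = 0.7, 0.1  # arbitrary trained weights for illustration

def network(x: float) -> float:
    return math.tanh(w * x + b)

def polynomial(x: float) -> float:
    z = w * x + b
    return z - z**3 / 3 + 2 * z**5 / 15

# Near the expansion point the polynomial tracks the network closely.
for x in [0.0, 0.2, -0.3]:
    assert abs(network(x) - polynomial(x)) < 1e-3
```

The same flattening argument applies layer by layer, which is the sense in which the trained system is "just" a (giant) polynomial map.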

edit: I don’t intend to dismiss the many genuine achievements, only the hype/presentation/interpretation going on around them.

> Then there's use of genetic algorithms to perform optimisation, or architectures that auto-tune the hyperpameters that add a whole new possibility for randomness to occur.

To be clear, these use deterministic pseudorandomness to make different designs and trainings; they don’t add randomness to the execution of any specific, trained network.

Post edited at 08:38
 deepsoup 16 Mar 2023
In reply to Xharlie:

> Has anyone any recommendations for great Sci-Fi that addresses the topic of the personification of algorithms or questions whether humans should or should not try their damnedest to confuse the boundary between true sentience and systems?

Hopefully this isn't a spoiler - the cosmic villains in Alastair Reynolds's 'Revelation Space' series are algorithms, he calls them the 'inhibitors'.

They're machines designed to suppress intelligent life in the galaxy (including themselves) - they have traps laid throughout the galaxy to detect when a civilisation has become advanced enough to start spreading out into interstellar space.  Once a civilisation gets their attention they attack to destroy it, upgrading themselves as they go to the minimum level of intelligence/sophistication necessary to get the job done and then dismantling themselves again afterwards back down to the level of a sort of cosmic mouse trap.

He does big space-opera type sci-fi without faster than light travel or communication, so interstellar travel takes centuries.  Consequently, rather than travelling themselves or communicating in 'real time' and waiting decades or centuries for a reply, people tend to send AI replicas of themselves to meetings.  A 'beta-level' is an AI based on the most detailed non-destructive scan of your brain possible, and there's quite a lot of debate around the ethics of that and whether or not the resulting AI is sentient and conscious or just a clever simulation designed to reflect your personality, react as you would and negotiate on your behalf.  (They're called 'beta-level' because an 'alpha-level' would require a destructive scan that would kill you - and again there's a debate as to whether that would mean effectively uploading your consciousness into the computer or just committing an elaborate suicide while creating a slightly more sophisticated than average 'beta'.)

In reply to wintertree:

yo wintertree, didn't expect to see this on a Thursday morning😀

I gave evidence to a parliamentary select committee on research and innovation a while back. When the chair asked why fundamental invention seemed to be becoming rarer, I had to remind them that there's no 'mythical well of mathematics' out there. Basically we've got the maths that we've got, and the future is integration of systems, parallelism and speed.

The 1992 version of me would recognise everything that's bolted together in 'AI', probably the biggest step forward has been adapting graphics cards to being matrix machines. I've been using GAs since we developed MOGA back then, a class of algorithms which search by exploration and exploitation (there's nothing else fundamentally in search and optimisation) and obey the law of 'no free lunch', although the GA has shown itself to be the best general purpose tool. Best described as 'biologically inspired' rather than AI.

It would be better if the 'tech bros' concentrated on broadcasting the capabilities of the work they are doing rather than muddying it with AI BS. I think the development work across automation and areas like medical image pattern recognition is stunning.

Any arbitrary nonlinear function can be approximated by an ANN, and can be approximated by a Taylor series... yes, back when I used to teach, we would work through application examples to examine the parallels between the two.

Deterministic is a tricky one. When I suggested to a Rolls-Royce team that we swap out a gain-scheduled PID controller and swap in my Fuzzy Logic controller to their Full Authority Digital Engine Controller, I was nearly escorted off site! They would only work with a deterministic controller on safety-critical systems. I later proved that Fuzzy was deterministic for them (gain-scheduled hyperplanes), and chatted about it with Lotfi Zadeh (inventor of FL) over a pizza one day; he cut me off early with 'it is deterministic, as most systems provably are with series analysis'. Btw Rolls still didn't use it!

OP Xharlie 16 Mar 2023
In reply to wintertree:

Indeed. In fact, these models are so deterministic that they can even be executed on disparate hardware: a trained model, given the same seed for the PRNG and the same input vector, will produce the same output – even in fp16.

What is also never mentioned in the press is that these names like "ChatGPT" actually refer to pipelines, not single algorithms. The most glaring example is the censorship of topics which carry any kind of liability.

This is easily seen in the Stable Diffusion code-base: if you ask Stable Diffusion to generate an inappropriate image, the algorithm will be quite happy to do so but another stage of the pipeline will flag that image and prevent it from being delivered as output. (The image generator just generates noise and then iteratively processes it to converge to something we happen to perceive to be a picture – it has no concept of what those pixels mean, beyond the fact that they, once transformed by the model, most accurately match a constellation of numerical weights derived from the input prompt – and the censorship phase simply classifies the result as ok or not-ok, according to training: mostly supervised learning.)
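
The shape of such a pipeline can be sketched like this (purely hypothetical function names and criteria, not the actual Stable Diffusion code): a generator stage that knows nothing about "appropriateness", followed by a separate classifier that gates the output.

```python
# Hypothetical sketch of a generate-then-gate pipeline. None of these
# names, values or thresholds come from any real code-base.
def generate_image(prompt: str) -> list[float]:
    # Stand-in for the denoising loop: the model just converges noise
    # to pixels; it has no notion of what the pixels "mean".
    return [0.1, 0.5, 0.9]  # fake pixel data

def safety_classifier(image: list[float]) -> bool:
    # Stand-in for the supervised classifier that flags outputs.
    return sum(image) < 10.0  # fake "ok" criterion

def pipeline(prompt: str):
    image = generate_image(prompt)
    if not safety_classifier(image):
        return None  # the pipeline, not the model, withholds the image
    return image
```

The point of the sketch is that the refusal lives in `pipeline`, a wrapper, and not in `generate_image`, the model itself.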

I am certain that, whenever these LLMs yield responses like, "I am an algorithm and cannot have an opinion on...", that is a similar mechanism at work.

And, yet, there are many people who perceive those very responses to represent some kind of greater self-awareness.

Why is that? Why do these people need the algorithm to be self-aware?

On the question of whether the big names are culpable for the personification or deification: I do acknowledge the point that, right now, it is mostly on the part of the pundits and press but the corporations are certainly pandering to it.

The researchers are quite clear that they have not created true A.G.I. but the companies are giving these things names, inviting users to "chat" to them, declaring that image-generation (via convergence from noise...) is "dreaming", and the rest. This clearly invites personification, so I do consider that they have an interest in it taking place.

Again: my question is WHY?

(Actually: the censorship is another clear example. If OpenAI et al. were honest, ChatGPT et al. would simply refuse to reply or give an error if asked about a banned topic such as US politics. Instead, OpenAI have clearly invested effort into implementing not only the detection of banned topics but human-like responses to deflect them. Why invest that effort, if not to obscure the fact that ChatGPT is an algorithm and not a human?)

So far, this is the best article I have found on the matter: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots...

 Offwidth 16 Mar 2023
In reply to Xharlie:

A superb article, so I'm really glad my reaction to the first paragraph didn't stop me reading:

>"before ChatGPT began writing such perfectly decent college essays that some professors said, “Screw it, I’ll just stop grading” "

... only if the Prof was almost as dumb with their setting and assessment as the AI.

I agree very much with this, given the algorithm moderation applied by big tech can easily be removed and used to motivate politically in bad ways... worst case, produce misinformation or even hate on a massive scale:

>We need strict liability for the technology’s creators, Dennett argues: “They should be held accountable. They should be sued. They should be put on record that if something they make is used to make counterfeit people, they will be held responsible. They’re on the verge, if they haven’t already done it, of creating very serious weapons of destruction against the stability and security of society. They should take that as seriously as the molecular biologists have taken the prospect of biological warfare or the atomic physicists have taken nuclear war.” This is the real code red. We need to “institute new attitudes, new laws, and spread them rapidly and remove the valorization of fooling people, the anthropomorphization,” he said. “We want smart machines, not artificial colleagues.”

Post edited at 11:27
OP Xharlie 16 Mar 2023
In reply to Offwidth:

I just found another decent one: https://www.theatlantic.com/technology/archive/2023/03/gpt4-arrival-human-a...

I particularly like the positivity in the closing paragraphs. I need that, today.

 Petrafied 16 Mar 2023
In reply to wintertree:

Good grief - that old chestnut.  Thought we'd got beyond that.  It's trivial to monitor the outside environment for "true" randomness if the rather complicated pseudorandom number generators used aren't enough for you.  Good luck, incidentally, predicting the outcome of a 192 layer Inception network on a 1000x1000 satellite image when fully loaded up with the hyperparameters tuned to use the various stochastic algorithms available to it, even if you do "slum" it using whatever pseudorandom number generator is available by default. 

Whatever your response is, I'll go along with and admit you're right - I simply don't have the energy to argue. 

 Jimbo C 16 Mar 2023
In reply to paul_in_cumbria:

> Deterministic is a tricky one.

I would agree that fuzzy logic is deterministic. In fact, isn't any task done by a digital computer deterministic by definition, because the only thing that a digital computer can do is follow a set of pre-determined instructions?

It is tricky. I don't flat out disagree with the concept that humans are deterministic but I'm also a long way from agreeing. What humans have that digital computers don't is a body and a huge amount of input from our senses, and I'm 99.9% sure that those inputs cannot be pre-determined. Then there's our brains of course, which are analogue, but does anybody really know how our brains work?

I had an interesting chat with a lecturer friend who took part in a blind trial where they marked essays; some generated by 'AI' and some by humans. Apparently the AI ones started off as really good essays but rapidly lost the plot and it was very easy to identify them.

 wintertree 16 Mar 2023
In reply to Petrafied:

> It's trivial to monitor the outside environment for "true" randomness if the rather complicated pseudorandom number generators used aren't enough for you.

I'm not saying they're "not good enough for me".  I'm highlighting that they are deterministic.

It's not "trivial" to generate true randomness.  It takes some effort, and the rate of entropy generation of computers is low; as in you could send it over a 1970s acoustic coupler low.  It's more commonly used to re-seed a PRNG so it doesn't give the same result every time you use it...

> Good luck, incidentally, predicting the outcome of a 192 layer Inception network on a 1000x1000 satellite image when fully loaded up with the hyperparameters tuned to use the various stochastic algorithms available to it, even if you do "slum" it using whatever pseudorandom number generator is available by default. 

I didn't say "easily predictable", I said "deterministic".  That is perhaps a critically important distinction to understanding the gulf highlighted by myself and others on this thread between a marvellously capable and genuinely impressive recall engine and a single spark of creativity.

Those stochastic algorithms are also, by the way, deterministic.
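
A minimal illustration of that (plain Python, nothing framework-specific; the function is made up for the example): the "noise" driving a stochastic algorithm comes from a seeded PRNG, so the same seed reproduces the identical "random" run every time.

```python
import random

# "Stochastic" steps driven by a PRNG: re-run with the same seed and
# you get bit-identical "random" draws -- deterministic under the hood.
def noisy_steps(seed: int, n: int = 5) -> list[float]:
    rng = random.Random(seed)  # dedicated PRNG instance, seeded
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

run_a = noisy_steps(42)
run_b = noisy_steps(42)
assert run_a == run_b  # deterministic despite looking "random"
```

The same applies to GPU frameworks once their seeds are fixed; any residual variation there comes from implementation details, not from genuine randomness.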

 wintertree 16 Mar 2023
In reply to paul_in_cumbria:

I can sympathise with Rolls, and their caution is what's needed in spades as we integrate more modern algorithms into medical imaging (for example).

It's not really about the system being "deterministic" but about it being deterministic in a way that can be readily broken down into sub-systems whose determinism can be understood, translated into mathematical and linguistic descriptions and tested robustly over all relevant parameters, so that the systems built from those blocks can reasonably be signed off on by their insurers and their consciences as they go into safety-critical use.

The modern stuff is an odd one as the training processes are very well understood but the actual doing-things processes they build are often highly opaque.  

> The 1992 version of me would recognise everything that's bolted together in 'AI', probably the biggest step forward has been adapting graphics cards to being matrix machines

I fully agree.  

> although the GA has shown itself to be the best general purpose tool. Best described as 'biologically inspired' rather than AI

An interesting time for optimisation algorithms; I've always been a fan of the Nelder-Mead simplex but there are many times a GA will beat it.  Big shake-ups on the cards if you look at the scaling roadmaps of some of the quantum computing efforts.  One thing quantum computers do very well is run optimisation algorithms.  I got a spam invite to a conference on Quantum AI in Drug Discovery a few months ago.  I despair for what press offices do with this stuff, I really do.

 planetmarshall 16 Mar 2023
In reply to Petrafied:

> But ANNs aren't purely deterministic (other than in very simple architectures, such as SVM-based or statically weighted feed-forward networks).  For example, the CNN architectures I've been working with have a myriad of ways of introducing nondeterministic behaviour: dropout strategies, stochastic optimisation algorithms, regularisation options, early stopping mechanisms etc. all mean that the same input generates different (if usually similar) results.  Then there's the use of genetic algorithms to perform optimisation, or architectures that auto-tune the hyperparameters, which add a whole new possibility for randomness to occur.

All of those things are deterministic. The only way to introduce non deterministic behaviour is to have it as an input, eg from a thermal noise source. For various reasons this is undesirable outside of cryptography applications as most of the time you want reproducible, but pseudo-random, results.

 DizzyVizion 16 Mar 2023
In reply to Xharlie:

Sentience is an algorithm. 

And what 'they' think they are up to is an evolutionary leap forward.

God is a concept. And a person thinking they are god is just another concept.

It actually is all as simple as that.

Post edited at 22:35
 wintertree 16 Mar 2023
In reply to DizzyVizion:

> Sentience is an algorithm. 

> […]

> It actually is all as simple as that.

Do you have a rational basis for saying that?

To date, science has produced zero hypotheses about the qualities of sentience that can be tested against a null hypothesis, and by extension no qualified claims can be made about the nature of sentience.

Sure, there are a lot of confident-sounding words in the literature, but none of them have p-values to quantify their chance of being correct.

Research in “AI” is so far down a route hosted on deterministic binary logic that it’s hard to see it reaching parity with mammalian or avian wetware.

 DizzyVizion 16 Mar 2023
In reply to wintertree:

Basically my opinion is this-

That the current debate on this matter appears to be approaching a similar point to when a geocentric universe was replaced by a heliocentric one.

All systems are just that; systems. And the definitions of the words algorithm and sentient are pretty much interchangeable. And that to my thinking is no strange coincidence.

In reply to wintertree:

> Heinlein’s “The Moon is a Harsh Mistress” explores this from a human interaction perspective.  It’s one of his best books, and another theme from it - the importance of militarily controlling the high ground (space) is likely to be a much more critical conversation than AI in the next 5 years.

After meaning to read it for many years, I finally got around to it last year and thought it mostly too dated, albeit with a few bits that have stood up well.

 wercat 20 Mar 2023
In reply to wintertree:

Oi!

don't forget the Octopus!

 Offwidth 21 Mar 2023
In reply to Xharlie:

More info on the topic:

https://www.theguardian.com/technology/2023/mar/21/the-ai-tools-that-will-w...

On the plus side an ex student I'm in touch with is finding ChatGPT very useful to debug code.

 Blue Straggler 22 Mar 2023

> 2001 A Space Odyssey, murderous HAL-9000 is an algorithm. 

Is HAL 9000 actually murderous? I know the definition is broad and loose but I am not sure that HAL 9000 strictly fits even those broad, loose definitions. The astronauts involved might see this differently, I admit. But HAL 9000 is well defended by Dr Chandra later.

 wercat 23 Mar 2023
In reply to Blue Straggler:

it is worth remembering that HAL was supposed to be a blend of Heuristic and Algorithmic construction, whence cometh his name.

