ARTICLE: Using OpenAI to Create Climbing Content and Images

This topic has been archived, and won't accept reply postings.

OpenAI's artificial intelligence research products have been doing the rounds on social media, attracting over one million users in just five days. Predictably, we've experimented with climbing-related content across their AI models, from text creation for news reports and reviews to image generation. So, do they know their gritstone classics from their gear reviews?


 broken spectre 27 Dec 2022
In reply to UKC/UKH Articles:

The machines have become sentient.

But what has AI ever done on grit?

 planetmarshall 27 Dec 2022
In reply to broken spectre:

"I'm sorry, but I am an artificial intelligence and do not have personal experiences or participate in activities. However, I can provide information about grit, which refers to the perseverance and determination to achieve a long-term goal or objective. Grit involves sustained effort and dedication to achieve a goal, even in the face of setbacks and challenges. It is an important quality that can help individuals achieve success in various areas of life, such as education, work, and personal development. Research has shown that grit is a strong predictor of success and is often correlated with higher levels of achievement and well-being."

 tehmarks 27 Dec 2022
In reply to UKC/UKH Articles:

Rich Simpson must be kicking himself that he came a generation too early.

 dig26 28 Dec 2022
In reply to UKC/UKH Articles:

These responses are fascinating and eerily realistic all at once! Although Niall Grimes’ fluffy pink hand grenade has to be my favourite - it was doing well impersonating him 😂

Post edited at 07:27
 Damo 28 Dec 2022
In reply to UKC/UKH Articles:

As you can imagine, Natalie, this has been a topic of discussion at both the AAJ and ExplorersWeb. The latter did a piece on it, quoting me in there: https://explorersweb.com/conversation-about-adventure-with-chatbot/ 

I somewhat agree with Andrew in his piece here (https://eveningsends.com/ai-will-make-climbing-writing-better) when he says it can replace the boilerplate dross that took up so much space in magazines (and now online), but of course it lacks the ability to articulate original human experience, something so important in climbing. It was also notable how the image program generated hypersexualised images of women when nothing of the sort was asked for. It's simply drawing on what we've all put out there - and made popular with our eyeballs and clicks. Garbage in, garbage out.

I tried ChatGPT this morning with some more specific questions. Honestly, I was surprised how bad it was, having seen other examples in other articles.

I asked a question about mountaineering in a particular region of Antarctica and what it churned out was, as expected, mostly 'factual' in style - but of the 24 'facts' in its response, 13 were completely wrong and two more were debatable.

If you didn't know anything about a subject, how could you possibly trust it? In instances like this it's little more than a combination Google-searcher-scraper + word processor.

I know, I sound like the bloke who said telephones would never catch on, and the other bloke who said almost nobody would need a computer, and don't even mention the bloke who said The Beatles were rubbish, but it really does have some huge gaps in the basic stuff, despite being pretty amazing on the surface.

Of course it's in the criticality and judgement where it is worst. It can't critique or parse things. No doubt it will improve, and fast, but I don't think the amazing-ness we're seeing right now is all that, and the real power is still a way off.

Post edited at 12:08
 Toerag 28 Dec 2022
In reply to Damo:

>  It's simply drawing on what we've all put out there - and made popular with our eyeballs and clicks. Garbage in, garbage out.

The scary thing is that it's going to start feeding on itself - AI will be generating content based upon AI-generated content.

> I know, I sound like the bloke who said telephones would never catch on, and the other bloke who said almost nobody would need a computer, and don't even mention the bloke who said The Beatles were rubbish, but it really does have some huge gaps in the basic stuff, despite being pretty amazing on the surface. Of course it's in the criticality and judgement where it is worst. It can't critique or parse things. No doubt it will improve, and fast, but I don't think the amazing-ness we're seeing right now is all that, and the real power is still a way off.

The problem is that for many it is becoming difficult to distinguish between AI-generated rubbish and good info. Although it's miles away from perfection, it's already good enough, in the same way that VHS was far worse than Betamax yet won because it was 'good enough'. In a world where real people doing work are expensive and clickbait is king, I think we can see its usage exploding in online media.

 Damo 28 Dec 2022
In reply to Toerag:

> In a world where real people doing work are expensive and clickbait is king, I think we can see its usage exploding in online media.

Agreed, but I still think that is only an intermediate step along the way to the real dangers.

You may have read how Facebook can tell if a girl is pregnant before her parents do, by monitoring her searches, shopping, age etc. and directing ads accordingly? That's like the first step towards something more serious. The danger is that AI will get to know us better than we know ourselves - and manipulate us accordingly.

We like to think there is a separation between our innermost selves and thoughts, which only we can access and control, and the way we present, talk or act outwardly, which is a deliberate action we choose. But by analysing our actions, AI may get to know our real motivations and inner controls better than we understand them ourselves, and direct us based on that - without us realising it.

It speaks to some very basic notions of humanity - free will, sense of self, 'destiny' - and to the question of whether our choices are pre-ordained by our genetics and environment, regardless of how we might like to think otherwise.

Post edited at 22:27
 JLS 28 Dec 2022
In reply to Damo:

Judging from this answer, we are safe for a while yet...

Make a list of the top ten sport climbs in Scotland.

It is difficult to create a definitive list of the "best" sport climbs in Scotland, as opinions on what constitutes a great climb can vary widely. However, here is a list of ten sport climbs in Scotland that are highly regarded and considered to be classic routes:

The Bowden Doors (F6c+) - located in the Bowden Doors area of the Galloway Hills

Suilven (F6b+) - located on the Isle of Skye

The Neist Point Prow (F6c+) - located on the Isle of Skye

The Lost World (F7a) - located in the Glencoe area

The Zawn (F6b+) - located on the Isle of Harris

The Great Gully (F6b+) - located on the Isle of Skye

The Big Cheese (F7a) - located in the Dumbarton area

The Great North Road (F6c) - located in the Angus Glens

The Rainbow Slab (F6c+) - located in the Torridon area

The Cioch Nose (F6b) - located on the Isle of Skye

This list is by no means exhaustive, and there are many other great sport climbs in Scotland that could be included. The climbs on this list are simply a selection of some of the most highly regarded and popular routes in the country. It is important

 Marek 28 Dec 2022
In reply to Toerag:

> The scary thing is that it's going to start feeding on itself - AI will be generating content based upon AI-generated content.

From an engineering perspective I wonder whether there's a risk of positive feedback, i.e. as AI-generated 'data' overtakes human-generated data, the output of an AI engine like this will simply shoot off into the wild (there's a toy sketch of what I mean at the end of this post). I can't see what would prevent it without the engine being able to distinguish between 'human data' and 'AI data' - which sort of defeats the whole point?

> The problem is that for many it is becoming difficult to distinguish between AI-generated rubbish and good info. 

Since good data is already swamped by rubbish (at least on social media), it's hardly a new problem. Today you rely on some sort of 'chain-of-trust'. That'll just continue and perhaps be more important than ever.
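
A toy sketch of that feedback loop, purely for illustration - this is nothing like how GPT is actually trained, and the numbers and names below are made up. Imagine a 'model' that simply fits a normal distribution to its training data, then gets retrained each generation on nothing but its own samples:

# Toy illustration only: a 'model' retrained each generation on its own output.
# Not how GPT works - just a minimal demo of a self-feeding loop.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)   # generation 0: 'human' data

for gen in range(20):
    mu, sigma = data.mean(), data.std()           # 'train' on the current corpus
    print(f"gen {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=100)        # next corpus is model output only

The mean wanders and the spread drifts generation by generation, because each round's sampling errors become the next round's ground truth and nothing pulls the system back towards the original human data.
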

 hang_about 29 Dec 2022
In reply to Marek:

What got me, as discussed on the other thread, is that it gives knowingly inaccurate answers because that's what most people accept as correct. Very worrying ethics. What were those laws of robotics again?

 planetmarshall 30 Dec 2022
In reply to hang_about:

> What got me, as discussed on the other thread, is that it gives knowingly inaccurate answers because that's what most people accept as correct.

Well, they're not "knowingly" inaccurate, because the GPT model doesn't "know" anything. Whilst the documentation states that the model "understands" language, it only understands it in the sense that the room in Searle's Chinese Room thought experiment "understands" Chinese. It doesn't have a contextual model of the world, and so is pretty far from what the general public think of as AI due to the influence of science fiction, and from what most researchers these days would call AGI (Artificial General Intelligence).

 hang_about 30 Dec 2022
In reply to planetmarshall:

Of course "know" is an inappropriate term. But when asked why it gave an incorrect answer, it responded that it did it because most people  expected that result. So as much as it knows anything, somewhere in those neural nets, it knew it was wrong and could even explain why. Spooky

 deepsoup 30 Dec 2022
In reply to hang_about:

> ..it knew it was wrong and could even explain why. Spooky

I don't think it did, and "know" remains an inappropriate verb to use - if it's true that it gave that answer just because it's the one most people would expect, then that's also why it said that.

 owensum 03 Jan 2023
In reply to deepsoup:

> I don't think it did, and "know" remains an inappropriate verb to use - if it's true that it gave that answer just because it's the one most people would expect, then that's also why it said that.

This is where the conversation takes a sharp dive into the bottomless depths of philosophy and epistemology.

I agree, ChatGPT does not "know" anything. Most philosophers agree that Searle's Chinese Room does not constitute knowledge of Chinese. The true origins of things we take for granted, like rationality and conceptual thinking, are deeply mysterious, and they involve far more than sophisticated pattern recognition.

Post edited at 17:48
 deepsoup 03 Jan 2023
In reply to owensum:

> This is where the conversation takes a sharp dive into the bottomless depths of philosophy and epistemology.

It's turtles all the way down.

 simes303 04 Jan 2023
In reply to Damo:

> I know, I sound like the bloke who said telephones would never catch on, and the other bloke who said almost nobody would need a computer, and don't even mention the bloke who said The Beatles were rubbish, but it really does have some huge gaps in the basic stuff, despite being pretty amazing on the surface.

...but The Beatles WERE rubbish weren't they?

