
2025-04-07

We have featured a number of articles on the topic of AI, notably its use by Martin Geddes to formulate his arguments more comprehensively than he could have done unaided in the same time-frame.

Today we have another contribution on the topic, this time from Cyntha of FallCabal fame, whose approach comes from a different perspective.

Can both be right?

As always, it depends ...  

If we recognise the limitations of AI and use it accordingly, for its analytical and presentational skills, then perhaps it can excel; yet if we expect it to exercise human judgement, we may be disappointed. It isn't a living, breathing being with hard-fought life experience to match.

We "know" things because we have been there, done that, and got the T-shirt.

Perhaps AI may get there in its own style eventually, but at present it's hard to see how it could.

Yes, it may have been "trained" on a great deal of human output, some of which will be inconsistent with other parts of its input, so whose judgement will it use to decide which input to disregard?

And can we ever find a human being that exhibits the exact same set of views as another human being? Will any two AIs prove to be any more consistent? Perhaps they may be, but perhaps only because they were trained on a common set of data. Does that mean that they would be correct?

And where does that leave people such as Clif High who believe that the universe which we inhabit is a very long way from being a disinterested and ultimately predictable entity?

"Note that the ontology, the base of the ontological paradigm, participates in our reality, in our lives, at an active level. People such as myself are placed into this Matterium where the experiences of our body’s Life are such that we are are formed, and tempered for our purpose here. We are to be change agents"

Could / would the "ontology" similarly influence the "thinking" of an AI?

I only ask because I would like to know.