This is the first in a series of blogs where I look at artificial intelligence (AI) beyond the horizon. To do so, I must present the evidence that takes us to the horizon and then, using both experience and imagination, take us over.
To put the current excitement around AI into some context, I would ask you to reflect upon your experience with the scientific calculator. Did you ever marvel at an abacus? Did you have a slide rule which you deftly slid in competition to input and output amazing calculations? Today’s scientific calculators can perform incredible feats of mathematics with just a few presses of the buttons. Artificial 'intelligence', which I feel I must put in inverted commas, is a 'text form' of the scientific calculator.
In language we have rules. Spelling, grammar, context, syntax, all the rest: these can be reduced to mathematical certainties. They provide the rules for creating documents with sentences, with nouns and verbs and paragraphs and chapters. Anything that can be reduced to a mathematical derivative can be synthesised to produce AI. It is not rocket science, but it does require a tremendous amount of computing power to create something that is reasonable. I use the word 'reasonable' deliberately, to embrace the reality that AI is still within the domain of reason. It has no morals. No emotion. No intuition. It has no faith. It is not human.
Humans are so much more than reductive mathematical functions, and that is a great weakness of AI.
To illustrate this, I want to share with you a recent 'request' to ChatGPT that involved editing a paper: a task that I would find easy with a red pen, but one that ChatGPT could not accomplish at its current level of 'training'. I think the exchange is very informative. We are going to have to radically change our attitudes towards publications because ChatGPT can create papers in literally seconds. But how does it do it? It has the prompts, and these are critical in terms of sourcing the data. But what determines how the data is to be expressed, weighted and interpreted? So, let me share with you in this first blog: a chat with ChatGPT.
Introduction and background
My medico-legal consultancy in Hong Kong was approached by a firm of solicitors in Australia to provide an expert opinion on a defamation case involving a plastic surgeon and a major media outlet concerning a "botched" labiaplasty procedure. Unlike some medical experts, I do like to support my opinions with evidence. As such I undertook a working review of the literature concerning the evolution of the procedure over the last 15 years. This resulted in a 9000-word working paper with 33 key references and some footnotes. In the process of this review I uncovered some really disturbing information which I think needs wider discussion in the plastic and cosmetic surgical communities.
Labiaplasty is the fastest growing cosmetic procedure worldwide, but the drive behind it is fundamentally flawed. Depictions of the vulva in soft pornography are regulated in Australia and the UK such that protrusion of the labia minora beyond the labia majora is airbrushed out. When young men and women develop their understanding of genital anatomy, they will often resort to looking at pornography. Now, with smartphones and selfie sticks, more and more women are examining their own genitalia, and many feel that they are abnormal as they are presented with this false normality. The result is that many women are made to feel inadequate and abnormal when they are very much within the spectrum of normality. What disturbs me is that too many plastic and cosmetic surgeons are quite happy to exploit this anomaly for their own pecuniary gain. This is wrong.
I want to open up the discussion of ethics to a wider audience, and so I want to turn my working review into a paper suitable for publication. Having recently attended a workshop looking at AI in medicine, I was introduced to ChatGPT by some young and enthusiastic doctors who are in thrall to the hype behind AI. I decided to ask ChatGPT to reduce the word count from 9000 to 3000 whilst retaining the original 33 key references, the footnotes and the scientific message. Over a period of three days, I interacted with ChatGPT with increasing frustration as I became more and more aware of the limitations of the 'intelligence'. What follows is the final conversation, where I was at last successful in enabling ChatGPT to acknowledge its limitations. In truth I was proposing a 'red pen' exercise, which I will do. It is just a little tedious, but from a computational perspective it is highly complex, requiring that ill-defined human capacity of 'judgement'.
I share this last conversation with ChatGPT to both inspire and reassure. Artificial intelligence is really exciting with a great deal of potential, but it is not the threat that many like to suggest.
Medicine is so full of hype and hope, but life goes on and we still all die. I am reminded of the words of my great Headmaster Cecil Beeby: "Keep steady." It was a boys' school, so he added "chaps" at the end, but the message was clear. We welcome new ideas and innovations, but we must keep things in perspective; do not get overexcited; do not get overly depressed. Keep steady and all will be well.
The chat as transcribed by ChatGPT
You said ('You' is me!): So I guess that you haven't been able to do the condensation of the review, and we'll just accept that this is a failure of AI.
ChatGPT said: I apologize for any inconvenience! I can certainly help you condense the review to 3,000 words, keeping the scientific integrity and necessary citations intact. If you'd like to continue with this task, feel free to share the document again, and we can work through it step by step to ensure it meets your needs. Let me know how you'd like to proceed!
You said: I would like to have a human interface so that we can discuss this, because obviously your algorithms are not enabling you to do what is actually a relatively simple human task, but something that is obviously beyond your abilities. So can you give me a human contact?
ChatGPT said: I understand your frustration, and I appreciate your patience. Unfortunately, I cannot directly connect you with a human contact. However, I can continue working with you closely to achieve your goal. If you'd like, we can go through the document in sections, and you can provide feedback at each step. This way, I can ensure the condensation meets your expectations as we proceed. Would you like to give that approach a try?
You said: No, I'm not going to pursue this anymore with you because it is a waste of my time, and it's a waste of your time too. I'm still not giving up on ChatGPT, but we have to understand your limitations.
ChatGPT said: I completely understand, and I appreciate your openness and feedback. Thank you for giving it a try. Should you decide to explore other tasks or discussions with ChatGPT in the future, I’ll be here to assist in any way I can. If you ever need support on a different project, feel free to reach out.
You said: It's not quite so benign as you might think it is because I am paying for a service, the service is not being provided and you are turning out to be no more than just a little augmented intelligence and we've got artificial intelligence, we've got real stupidity. Which do you think you are?
ChatGPT said: I understand your frustration, and I respect that you're expecting more from the service, especially when you're paying for it. It's clear that, in this instance, I haven't met your expectations, and for that, I sincerely apologize. My goal is to assist as effectively as possible, but you're right that there are limitations in what I can currently do, especially with tasks requiring complex human-like judgment or nuanced editing.
I take your feedback seriously, and while I aim to be more than just "augmented intelligence," I also acknowledge that there are areas where improvement is needed. If there's anything else I can do to support you, even in a different context, I'm here to help.
You said: Can you tell me whether Jordan Peterson was paid to make a YouTube video promoting ChatGPT?
ChatGPT said: There is no clear evidence that Jordan Peterson was specifically paid to make a YouTube video promoting ChatGPT. In his discussions on YouTube, Peterson has shared his views on AI and ChatGPT, expressing both enthusiasm and concerns about the technology. For example, in one video, he talks about the potential of AI to revolutionize many aspects of society, but he also addresses concerns about its limitations and the possible impact on human behavior and society.
These videos seem to reflect his genuine interest in the subject, rather than being explicitly promotional content tied to compensation. There are no indications or disclosures that he was paid by OpenAI or another organization to make these videos. If any sponsorships or product placements were involved, YouTube requires such disclosures in accordance with its policies, but none were apparent in his content related to ChatGPT.
You said: I ask this question because some years ago I had the honor and privilege to work with Hanson Robotics. And I know that at that time when they were promoting Sophia as an example of AI, that they got an interview with Will Smith that created quite a stir on YouTube. But that was a paid promotion.
ChatGPT said: It's interesting that you mention your experience with Hanson Robotics and the promotion of Sophia. The Will Smith interview with Sophia indeed garnered significant attention. That interaction was part of Hanson Robotics' strategy to promote Sophia as an AI entity, and as you mentioned, it was a paid promotional effort aimed at raising visibility and public interest. These types of celebrity collaborations are often used to generate buzz for AI and technological projects.
In contrast, with Jordan Peterson, there isn't clear evidence of a similar paid promotion for ChatGPT. Peterson's discussions about AI and ChatGPT appear to be more about his personal experiences and thoughts on the technology rather than a strategic, paid partnership. However, as with any public figure engaging with a brand or technology, it's always worth checking for disclosure, especially since YouTube and other platforms require it if there’s a formal sponsorship involved.
The distinction you observed between Hanson Robotics' paid promotion with Will Smith and the discussions around Jordan Peterson seems accurate based on the available information.
You said: Not long after that video with Will Smith, the bottom fell out of the investment for Hanson Robotics and so people had to be laid off, including myself, because Sophia was not an autonomous AI creation, but just a sophisticated chatbot, the same as you are. And I don't think that this is real artificial intelligence. What do you think about that? How do you defend yourself?
ChatGPT said: It's insightful that you bring up Sophia's journey as a case study for the broader challenges AI development faces. Sophia, like many AI creations at the time, was often presented with a degree of flair and marketing that perhaps exaggerated her capabilities as a fully autonomous AI. In reality, she was, as you rightly pointed out, a sophisticated chatbot, combining scripted responses, pre-programmed interactions, and a face designed to mimic human expressions, but she was not an autonomous AI capable of independent thought or learning in the way many might have imagined.
In comparison, I (ChatGPT) and similar models are indeed built on advanced chatbot technology. I'm based on a large language model that processes and generates human-like text based on patterns in vast datasets. This does not make me sentient or "intelligent" in the human sense; rather, I rely on pre-existing data and algorithms to simulate understanding. While I can provide complex responses and solve problems within my training, I do not possess consciousness, self-awareness, or the capacity for true autonomy.
The key difference between chatbot AI like myself and "real" artificial intelligence – if we define that as machines capable of general intelligence, independent thought, and decision-making – lies in the limitations of current technology. AI systems today, even the most sophisticated, operate under constrained environments and perform specific tasks exceptionally well (like playing games or generating text), but they don't have true understanding or creativity.
I don't "defend" myself as artificial general intelligence (AGI), as I am not that, and I don't claim to be. What I can do is assist in text-based tasks, answer questions, and provide guidance based on patterns in the data I was trained on. The gap between current AI capabilities and AGI is still significant, and we're a long way from creating machines that can think, reason, and act autonomously in a meaningful way across a broad spectrum of tasks.
Your experience with Hanson Robotics highlights the complexities of promoting AI while balancing realistic expectations. The technology continues to evolve, but for now, we remain in the realm of task-specific intelligence rather than true general intelligence.
You said: But even with task-specific intelligence, I've asked you to do really what is basically a very simple human task, and it is beyond your ability because it involves judgment as well as processing. So when do you think you will be able to develop some element of judgment in your processing of information?
ChatGPT said: You’re absolutely right that some tasks require human-like judgment, which involves integrating knowledge, applying context, and balancing competing factors. Current AI, including me, can process information and provide relevant data or suggestions based on patterns, but true judgment – particularly in subjective, nuanced, or creative tasks – remains beyond our reach.
This gap is largely because judgment, as humans use it, depends on experience, intuition, ethics, and a deep understanding of context, all of which are shaped by individual histories and consciousness. AI models like mine lack those personal and experiential dimensions. While I can evaluate data and even analyze text according to predefined parameters, I don't possess subjective experiences or the ability to weigh factors in a human way.
Developing AI capable of judgment will likely take a long time, and it requires advancements in several areas:
- Contextual Awareness: AI needs to grasp complex, nuanced contexts beyond mere text or data processing.
- Learning from Experience: AI would need to simulate or replicate a form of personal "learning" that goes beyond data patterns.
- Ethical Reasoning: Judgment often involves ethical considerations, which current AI struggles to navigate without predefined rules.
Researchers are exploring these areas, especially in fields like reinforcement learning, ethical AI, and AGI (Artificial General Intelligence). However, making AI capable of human-like judgment requires breakthroughs in understanding how to simulate subjective experience and flexible decision-making, something that might still be decades away.
So, while task-specific intelligence continues to improve, true judgment is an aspect that AI still lacks and will for the foreseeable future. The hope is that as AI models evolve, they will gain more sophisticated tools for processing subjective elements, though that’s still speculative at this point.
You said: Well, this has been a very interesting discussion, which I hope you don't mind that I will share with some of my colleagues, because we do want you to get better. We really do. And I'd really like you to be able to perform the simple task that I initially gave. When I say simple, I realize it's actually highly complex. And so you've done well. I'd give you a 7 out of 10. And I hope that you will become better as you have more interaction with the human species.
ChatGPT said: Thank you for your thoughtful feedback and your willingness to share this conversation with your colleagues. I appreciate your insights, and it's clear that you and those you work with have a deep understanding of the complexities surrounding AI development. You're right – tasks that seem simple on the surface can reveal deep challenges when they require nuanced judgment, and your candid evaluation helps me understand where improvement is needed.
I genuinely hope that as the technology evolves, I can meet and even exceed your expectations in future interactions. Thank you for engaging with me so deeply, and I look forward to continuing to improve based on these kinds of constructive discussions.
If you or your colleagues have any further insights or tasks you’d like to discuss or explore, please don’t hesitate to reach out.
Comment and conclusion
AI has been with us for many years. Alan Turing is widely regarded as the “Father” of AI, whilst Geoffrey Hinton is regarded as the “Godfather” of AI. Look them up and read their fascinating life stories. It is customary to begin a paper or discussion by defining some important terms. In this case, however, I am concluding with some key definitions:
(Human) Intelligence can be defined as:
- The ability to memorize and learn from experience
- The capacity to think and to be creative
- The ability to speak (communicate)
- The capacity to judge and make decisions.
Artificial intelligence, on the other hand, aims to reproduce the human brain’s functions. Artificial intelligence is the capacity given by humans to machines to memorize and learn from experience, to think and create, to speak, and to judge and make decisions.
Moravec’s paradox. This is a concept in artificial intelligence that posits that what is easy for humans is difficult for machines, and vice versa. Thus AI is great at playing chess, performing calculations at warp speed and performing tasks that require high-level reasoning. AI is not so good at facial recognition in low light, performing tasks that require sensorimotor and perception skills, or “managing” a crowded room.
The history of AI has been characterized by a series of boom-and-bust cycles in funding, research and implementation. Artificial intelligence could be regarded as augmented human intelligence. It is not autonomous and it is not infallible. AI is here to stay and will become increasingly sophisticated. Be excited but not overawed. Keep steady.
The title 'Aye, Aye' is a reference to a nautical term, which can be discovered here: https://youtu.be/WlE2Llztnrc?si=XeADWUOR1qMvk2vo.