Asimov Unknowingly Pioneered Modern Prompt Engineering

by Login Jones (@lojosoft), May 14th, 2023


Isaac Asimov, a visionary in the realm of science fiction, unknowingly pioneered modern prompt engineering through his thought-provoking exploration of human-robot interactions in his groundbreaking Robot Series.

Prompt Engineering - The Background and History

"The hottest new programming language is English." - Andrej Karpathy (@karpathy)


Prompt engineering is the process of crafting and refining the input prompts given to an AI large language model so that it generates accurate, relevant, and useful output. It involves the deliberate and systematic design and refinement of prompts and underlying data structures to steer AI systems toward specific, desired outputs. With the emergence of AI, particularly natural language processing models, prompt engineering has gained significance as a means to improve the effectiveness and user experience of AI systems.


Prompt engineering combines elements of logic, coding, art and language.

Prompt Engineering Terms

Prompt Clarity: The prompt must be clear and unambiguous, leaving no room for misinterpretation by the AI.


Prompt Precision: The prompt is designed to target the specific information or output desired from the AI.


Prompt Context: Sufficient context within the prompt, such as background information or examples, is essential to guide the AI system towards producing the desired output.


Prompt Adaptability: The prompt should yield expected and accurate results across differently trained AI models.


Chain of Thought Prompting: The prompt includes intermediate reasoning steps that illuminate the thought process required to solve the problem.


Least to Most Prompting: Breaking a problem into sub-problems, then solving each one in turn to lead the AI step by step toward the final solution.


Role Prompting: The prompt assigns the AI a particular specialized role, which helps lead to more accurate results.


Zero, One or Few Shot Prompting: Providing zero, one, or a few example question/answer pairs to set the context for the AI, constraining it along a specific path toward more accurate results (see the sketch below).
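To make these terms concrete, here is a minimal Python sketch of how a few of the styles above might be assembled as plain prompt strings before being sent to a model. The question and the example pairs are invented for illustration, and the snippet is not tied to any particular model or API.

```python
# A minimal, API-agnostic sketch: each prompting style is just a different
# way of assembling the text handed to a language model. The question and
# the example Q/A pairs below are invented for illustration.

question = "A robot testifies that its master did not plagiarize. Could it be lying?"

# Zero-shot: the bare question, no examples.
zero_shot = question

# Few-shot: a few example question/answer pairs constrain the model along a path.
few_shot = (
    "Q: A robot is ordered into danger. Which law governs its reply?\n"
    "A: The Second Law, unless obeying would violate the First Law.\n"
    "Q: A robot must weigh harm to two different humans. What does it consider?\n"
    "A: The relative harm to each human under the First Law.\n"
    f"Q: {question}\n"
    "A:"
)

# Role prompting: specialize the model's context to a particular role.
role = f"You are a roboticist who is an expert in the Three Laws of Robotics.\n{question}"

# Chain-of-thought prompting: ask the model to expose its reasoning steps.
chain_of_thought = f"{question}\nLet's reason step by step before giving a final answer."

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot),
                     ("role", role), ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```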

Asimov’s Robot Series


Asimov’s Robot universe is a vast and intricate world that spans numerous novels, short stories, and interconnected series. Set in a future where humans have colonized various planets throughout the galaxy, this universe is characterized by a clear divide between the Earth and the Spacer worlds.


Earth, overpopulated and technologically limited, is inhabited by humans who live in vast, domed cities known as "caves of steel", where robots are generally feared and distrusted.


Spacer worlds, in contrast, are technologically advanced societies with a sparse population, where humans and robots coexist in harmony, and robots have become an essential part of everyday life. The Spacer worlds maintain a condescending attitude towards Earth and its inhabitants, seeing them as backward and inferior.


The Three Laws of Robotics are a concept central to the Robot universe, serving as the guiding principles for robot behavior.


These laws, devised by Asimov, are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Throughout Asimov's stories, the interactions between humans and robots, as well as the ethical and philosophical implications of the Three Laws, form the backbone of the Robot universe, offering readers a unique exploration of the challenges and potential consequences of a future where humanity and advanced artificial intelligence coexist.

Asimov: The Unconscious Prompt Engineer

Isaac Asimov's Robot series of novels and short stories, beginning in the 1940s, put a strong emphasis on the importance of giving precise commands to robots, which can be seen as a precursor to modern prompt engineering. Asimov's works demonstrated an inherent understanding of the need for carefully crafted instructions, particularly when dealing with the complex AI systems implied by his robots operating under the Three Laws of Robotics.

Examples of Prompt Engineering from Asimov's Works

Mirror Image (short story, 1972)

Mirror Image, Baley interrogating the robots


During a casual interstellar trip by a group of Spacers, a crime happens on the spaceship. The two parties are a young and brilliant mathematician (Sabbat) and an elder, established mathematician (Humboldt); each accuses the other of stealing a brilliant new mathematical idea. The only witnesses are the two mathematicians' robot servants. The Earthman detective Elijah Baley is asked to investigate and solve the crime as quickly as possible, before it explodes into a much bigger scandal; however, all he is allowed to do is interview the robots. Baley sees that each party is putting forward the mirror image of the other party's story, and he has to figure out which party is lying.


Detective Baley interrogates the younger mathematician's (Sabbat's) robot, walks it through the logical steps showing that the elder mathematician would come to greater harm through the robot's testimony, and gets the robot to change its testimony.


Here is an excerpt of the interrogation between Detective Elijah Baley and the robot servant R. Idda, slightly edited for brevity:


Baley: You are the personal robot of Gennao Sabbat, are you not?

Robot: I am sir.

Baley: For how long?

Robot: For twenty-two years, sir.

Baley: And your master's reputation is valuable to you?

Robot: Yes, sir.

Baley: Would you consider it of importance to protect that reputation?

Robot: Yes, sir.

Baley: As important to protect his reputation as his physical life?

Robot: No, sir.

Baley: As important to protect his reputation as the reputation of another?

Robot: Such cases must be decided on their individual merit, sir. There is no way of establishing a general rule.

Baley: If you decided that the reputation of your master were more important than that of another, say, that of Alfred Barr Humboldt, would you lie to protect your master's reputation?

Robot: I would, sir.

Baley: Did you lie in your testimony concerning your master in his controversy with Dr. Humboldt?

Robot: No, sir.

Baley: But if you were lying, you would deny you were lying in order to protect that lie, wouldn't you?

Robot: Yes, sir.

Baley: Well, then, let's consider this. Your master, Gennao Sabbat, is a young man of great reputation in mathematics, but he is a young man. If, in this controversy with Dr. Humboldt, he had succumbed to temptation and had acted unethically, he would suffer a certain eclipse of reputation, but he is young and would have ample time to recover. He would have many intellectual triumphs ahead of him and men would eventually look upon this plagiaristic attempt as the mistake of a hot-blooded youth, deficient in judgment. It would be something that would be made up for in the future. If, on the other hand, it were Dr. Humboldt who succumbed to temptation, the matter would be much more serious. He is an old man whose great deeds have spread over centuries. His reputation has been unblemished hitherto. All of that, however, would be forgotten in the light of this one crime of his later years, and he would have no opportunity to make up for it in the comparatively short time remaining to him. There would be little more that he could accomplish. There would be so many more years of work ruined in Humboldt's case than in that of your master and so much less opportunity to win back his position. You see, don't you, that Humboldt faces the worse situation and deserves the greater consideration?

Robot: My evidence was a lie. It was Dr. Humboldt

Baley: You are instructed to say nothing to anyone about this until given permission by the captain of the ship


When Baley interrogates the elder mathematician Humboldt's robot servant, R. Preston, the interrogation goes exactly the same way except for the end, which goes like this:


Baley: But if you were lying, you would deny you were lying, in order to protect that lie, wouldn't you?

Robot: Yes, sir.

Baley: Well, then, let's consider this. Your master, Alfred Barr Humboldt, is an old man of great reputation in mathematics, but he is an old man. If, in this controversy with Dr. Sabbat, he had succumbed to temptation and had acted unethically, he would suffer a certain eclipse of reputation, but his great age and his centuries of accomplishments would stand against that and would win out. Men would look upon this plagiaristic attempt as the mistake of a perhaps-sick old man, no longer certain in judgment. If, on the other hand, it were Dr. Sabbat who had succumbed to temptation, the matter would be much more serious. He is a young man, with a far less secure reputation. He would ordinarily have centuries ahead of him in which he might accumulate knowledge and achieve great things. This will be closed to him, now, obscured by one mistake of his youth. He has a much longer future to lose than your master has. You see, don't you, that Sabbat faces the worse situation and deserves the greater consideration?

Robot: My evidence was as I-

Baley: Please continue, R. Preston.

Daneel: I am afraid, friend Elijah, that R. Preston is in stasis [has crashed]. He is out of commission.


In the short story Detective Baley uses this difference in the robots' responses to set a trap and trick the actual thief into confessing.


Here we can see Asimov use least-to-most prompting, deployed by Baley while interrogating the robots. For both robots he wants to find out whether there is any asymmetry in their experience (i.e., which one is lying), and his approach is to lead them down a reasoning path that culminates in a complex moral question, as sketched below.
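As a rough illustration, Baley's interrogation can be read as a least-to-most prompt sequence: easy sub-questions first, each answer carried forward as context for the next, with the hard moral question saved for last. The sketch below is hypothetical; the `ask` helper merely records the dialogue and stands in for whatever model call you would actually use.

```python
# Hypothetical least-to-most prompt sequence modeled on Baley's interrogation.
# `ask` is a stand-in for a real model call; here it only records the dialogue.

def ask(history: list[str], sub_question: str) -> str:
    """Append a sub-question to the running context and return a placeholder answer."""
    history.append(f"Q: {sub_question}")
    answer = "<model answer would go here>"  # a real system would call an LLM with `history`
    history.append(f"A: {answer}")
    return answer

history: list[str] = []

# Easiest sub-problems first, each one narrowing the path to the final question.
ask(history, "Is your master's reputation valuable to you?")
ask(history, "Is it as important as his physical life?")
ask(history, "Is it as important as the reputation of another person?")
ask(history, "Given that the other party would suffer the greater harm, "
             "was your testimony about your master truthful?")

print("\n".join(history))
```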


Ultimately, Baley uses a combination of this asymmetry in the robots' responses and his intuition about human nature to solve the case, but it's very interesting to see Asimov anticipate the nuances required to interact with human-level AI; in fact, he bases this seminal science fiction work on exactly that insight.

Runaround (1942)

Speedy running around confused on Mercury


In this short story, the unusually expensive robot Speedy is sent on a mission to retrieve an element on a dangerous planet. Because Speedy is so expensive, he is programmed to follow the Third Law (a robot must protect its own existence as long as such protection does not conflict with the First or Second Law) more strongly than normal.


Powell and Donovan, the human protagonists, assign Speedy the task of retrieving selenium from a selenium pool. The humans need it to recharge their power cells, which are running low, and to protect themselves from the heat. However, they inadvertently create a conflict between the Second and Third Laws of Robotics by giving Speedy an imprecise command that does not emphasize the importance of the mission. They instruct Speedy, "Go out and get it [the selenium]." Due to the danger posed by the selenium pool and Speedy's propensity to follow the Third Law more strongly than normal, Speedy finds himself stuck in a loop, unable to prioritize his orders (Second Law) over his self-preservation (Third Law).


The issue is eventually resolved by Powell placing himself in danger, which invokes the First Law and compels Speedy to prioritize saving him. Powell and Donovan give Speedy an imprecise command at the beginning:


Then, he said, "Listen, Mike, what did you say to Speedy when you sent him after the selenium?"


Donovan was taken aback. "Well damn it - I don’t know. I just told him to get it."


"Yes, I know, but how? Try to remember the exact words."

"I said... uh... I said: 'Speedy, we need some selenium. You can get it such-and-such a place. Go get it' - that’s all. What more did you want me to say?"


The key here is that the command given by Donovan, "I just told him to get it", was imprecise because it did not convey urgency. In Asimov's Robot universe, the tone and delivery of a command are just additional variables of the prompt itself. Because the command's tone wasn't particularly urgent, it led to a conflict between the Three Laws.


Because Speedy is stuck in a loop and cannot accept another prompt that has been iterated on and reformulated more precisely, the only way to get the correct action was to change other variables in the environment so that the initial imprecise prompt would lead to the desired output. Powell eventually solves the issue by placing himself in danger, forcing Speedy to prioritize saving him (the First Law took priority) and breaking him out of the deadlock between the Second and Third Law mandates.


This story shows how failing to include the proper context in the prompt (the order to Speedy) led to inaccurate results. The missing context is captured in this excerpt from Runaround:


The only thing that could save them was selenium. The only thing that could get the selenium was Speedy. If Speedy didn’t come back, no selenium. No selenium, no photocell banks. No photo-banks - well, death by slow broiling is one of the more unpleasant ways of being done in.


Donovan rubbed his red mop of hair savagely and expressed himself with bitterness.


"We'll be the laughingstock of the System, Greg. How can everything have gone so wrong so soon? The great team of Powell and Donovan is sent out to Mercury to report on the advisability of reopening the Sunside Mining Station with modern techniques and robots and we ruin everything the first day. A purely routine job, too. We'll never live it down."


"We won't have to, perhaps," replied Powell, quietly. "If we don’t do something quickly, living anything down - or even just plain living - will be out of the question."


The prompt also suffered from a lack of adaptability; a good prompt should be capable of yielding accurate results on different AI systems. Donovan says that he gave Speedy a standard order (prompt) to get the selenium.


Donovan: "I said... uh... I said: ‘Speedy, we need some selenium. You can get it such-and-such a place. Go get it - that’s all. What more did you want me to say?"


Powell: "You didn't put any urgency into the order, did you?"


Donovan: "What for? It was pure routine."

The incorrect assumption here is that a simple order/prompt to get selenium, which would work fine on any other robot/AI, would work the same on Speedy. But since Speedy's 'positronic brain'/neural net is trained differently (the Third Law of self-preservation is strengthened), Speedy is not a standard AI, and a more adaptable prompt/order should have been used.

The principles of clarity, context, and adaptability in prompts given to AI are core concepts in prompt engineering. It's generally understood that "the more descriptive and detailed the prompt is, the better the results" (PromptingGuide.ai). In this story (first published in 1942), Asimov shows in detail how not following these rules can lead to inaccurate results, as illustrated in the sketch below.
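To make the contrast concrete, here is a small sketch that sets Donovan's vague order next to a version carrying the clarity, context, and urgency whose absence caused Speedy's deadlock. The precise wording is invented for illustration and is not drawn from the story or from any prompt library.

```python
# Illustrative only: the vague order from the story versus a rewrite that adds
# the context and urgency whose absence caused the conflict between the Laws.

vague_order = (
    "Speedy, we need some selenium. "
    "You can get it at such-and-such a place. Go get it."
)

precise_order = (
    "Speedy, retrieve selenium from the pool at such-and-such a place.\n"
    "Context: our photocell banks are failing and we cannot survive the heat without it.\n"
    "Priority: this order is urgent and outweighs routine caution about your own safety.\n"
    "Constraint: report back immediately if you cannot complete the task."
)

print(vague_order)
print()
print(precise_order)
```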

Caves of Steel (1954)

Dr. Gerrigel examining Daneel


"Caves of Steel" was first published in 1954 and is the first in a series of novels set in the Robot Universe and introduces the characters Detective Elijah Baley and Robot Daneel Olivaw.


The story is set in a far future in which Earth's inhabitants live in large, domed cities and harbor deep resentment towards the Spacers, a group of humans who have colonized other planets and embraced advanced technology and robotics. Asimov uses the buddy cop narrative to explore themes of prejudice, AI, technology, and cooperation. The partnership between Baley and Daneel serves as the cornerstone for Asimov's Robot Series, which continues to delve into the dynamic relationship between humans and robots/AI, as well as the challenges they face in coexistence.


There's a short but very clever scene in the chapters "Words From An Expert" and "Shift To The Machine" showing that even in 1954 Asimov predicted there would be a need to evaluate the effectiveness of AI; the evaluation could be very invasive, but there would also be an easier method to quickly check the health and accuracy of a model.


The scene in question involves an Earth roboticist (Dr. Gerrigel) who has been asked by Baley to evaluate the robot Daneel Olivaw and verify that the First Law has been correctly installed (basically, that it is an accurate model).


When offered the computer laboratories for any equipment he might need, Dr. Gerrigel responds:


Dr. Gerrigel: My dear Mr. Baley, I won’t need a laboratory.

Baley: Why not?

Dr. Gerrigel: It’s not difficult to test the First Law. ... it’s simple enough.

Baley: Would you explain what you mean? Are you saying that you can test him here?

Dr. Gerrigel: “Yes, of course. Look, Mr. Baley, I’ll give you an analogy. If I were a Doctor of Medicine and had to test a patient’s blood sugar, I’d need a chemical laboratory. If I needed to measure his basal metabolic rate, or test his cortical function, or check his genes to pinpoint a congenital malfunction, I’d need elaborate equipment. On the other hand, I could check whether he were blind by merely passing my hand before his eyes and I could test whether he were dead by merely feeling his pulse. “What I’m getting at is that the more important and fundamental the property being tested, the simpler the needed equipment. It’s the same in a robot. The First Law is fundamental. It affects everything. If it were absent, the robot could not react properly in two dozen obvious ways.”


The actual evaluation that Dr. Gerrigel performs on Daneel is described thus:

What followed confused and disappointed him.

Dr. Gerrigel proceeded to ask questions and perform actions that seemed without meaning, punctuated by references to his triple slide rule and occasionally to the viewer.

At one time, he asked, “If I have two cousins, five years apart in age, and the younger is a girl, what sex is the older?”

Daneel answered (inevitably, Baley thought), “It is impossible to say on the information given.”

To which Dr. Gerrigel’s only response, aside from a glance at his stop watch, was to extend his right hand as far as he could sideways and to say, “Would you touch the tip of my middle finger with the tip of the third finger of your left hand?”

Daneel did that promptly and easily.

In fifteen minutes, not more, Dr. Gerrigel was finished.


This is not dissimilar to modern approaches to evaluating large language models (LLMs). LLMs can be evaluated with a more involved approach, called extrinsic evaluation, that integrates the model into other apps and processes, and with a more introspective but quicker approach, called intrinsic evaluation, that evaluates the model directly. Intrinsic evaluation relies on measures like perplexity and entropy, computed with mathematical formulas over a data set.
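For readers unfamiliar with those measures, here is a minimal sketch of how perplexity falls out of the average negative log-probability a model assigns to a held-out token sequence. The per-token probabilities below are made up; a real intrinsic evaluation would compute them from the model over an entire test corpus.

```python
import math

# Hypothetical per-token probabilities a model assigned to a held-out sentence.
token_probs = [0.25, 0.10, 0.60, 0.05, 0.30]

# Cross-entropy: average negative log-probability per token (in nats).
cross_entropy = -sum(math.log(p) for p in token_probs) / len(token_probs)

# Perplexity: the exponential of the cross-entropy; lower means the model
# is less "surprised" by the text.
perplexity = math.exp(cross_entropy)

print(f"cross-entropy: {cross_entropy:.3f} nats, perplexity: {perplexity:.2f}")
```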


When Dr. Gerrigel evaluates Daneel, he conducts a series of tests to assess the robot's physical and functional properties, determining whether it is indeed a robot and whether the First Law has been properly installed. Similarly, intrinsic evaluation of a large language model involves analyzing its inner workings and performance on specific tasks to understand how well it has learned language patterns, relationships, and knowledge from the training data.


It often includes measuring its performance on various linguistic tasks, such as predicting the next word in a sentence, answering questions, or summarizing text. Researchers may also analyze the model's internal representations, such as examining the learned embeddings or attention mechanisms, to gain insights into the linguistic knowledge it has acquired during training. These evaluations help to determine the model's strengths and weaknesses, as well as its ability to understand and generate human-like language.


In both cases, the evaluations are designed to assess the capabilities of the subject (Daneel or a large language model) and to gain insights into their underlying mechanisms.


Even though Asimov doesn't do much worldbuilding around the details of Dr. Gerrigel's 'intrinsic evaluation' of Daneel, it's astonishing that, 70 years ago, Asimov predicted this type of AI evaluation would be needed.

Conclusion


These are just a few examples of how Isaac Asimov delved into the intricate relationship between AI and humanity, anticipating the importance of prompt engineering in eliciting higher quality responses from AI and robots. Asimov's Robot Series represents speculative science fiction that has become increasingly relevant due to the widespread success of large language models and AI. This seminal body of work offers valuable historical context and insight for data scientists and machine learning engineers, shedding light on the origins of many contemporary ideas and inspirations in the field.


References

Learn Prompting

Microsoft

What is Prompt Engineering?

Prompt Engineering concepts and use cases

Least to most prompting

Prompting tips

Evaluating Language Models in NLP

Choosing The Right Prompt Types

Prompt Engineering Guide

Prompt Engineering Reddit

Andrej Karpathy homepage

Entropy in data science

Perplexity in NLP
