GPTerm: Creating Intelligent Terminal Apps with ChatGPT and LLM Models
by adem, July 10th, 2023


This article delves into the exciting realm of making terminal applications smarter by integrating ChatGPT, a cutting-edge language model developed by OpenAI. The focus is on exploring how ChatGPT and other LLM models such as WizardCoder and MPT can revolutionize the user experience by enabling natural language interactions, opening up new possibilities for enhanced productivity and efficiency. To access the project directly, please follow the repository link at the end of the article.


Large language models have revolutionized the field of natural language processing, empowering developers and researchers to build intelligent applications. In this article, we explore how these models bring intelligence to terminal applications. Three different models were used for this purpose: ChatGPT, MPT, and WizardCoder.


ChatGPT, developed by OpenAI, is a state-of-the-art language model designed for generating human-like text responses. It excels at engaging in interactive and dynamic conversations with users. From answering questions to providing detailed explanations, ChatGPT showcases the power of language models in natural language understanding and generation.


MPT (MosaicML Pretrained Transformer) is an open-source family of transformer models trained on a vast corpus of text and code. It can be applied to a wide range of language understanding and generation tasks, and its openly released weights make it a practical choice for running locally.


WizardCoder is an innovative language model specifically designed for code generation and programming assistance. It understands programming languages and can generate code snippets based on user prompts. With WizardCoder, developers can leverage the power of language models to speed up their coding tasks, explore code suggestions, and gain insights into best coding practices.

GPTerm

This project focuses on the conversion of plain text into shell commands using ChatGPT and open-source language models. While some models yielded satisfactory results, others fell short of expectations. The primary terminal application used in this study is iTerm; the project is expected to be compatible with other terminal applications as well, although this has not been specifically tested.


The project is divided into two main parts. In the first part, the user can manually enter and execute shell commands within the current plugin without closing it. The second part involves the translation of given plain text into shell commands, which are then presented to the user. If desired, the user has the flexibility to modify or remove certain sections of the generated command without executing it. Both sections function in a similar manner.

To differentiate between manual input of shell commands and obtaining shell commands from plain text, a dot (.) must be placed at the beginning of the plain text. This allows the application to determine the user’s intention accurately.
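For instance, in the prompt format used in the examples later in this article, the first entry below would be executed directly as a shell command, while the dotted entry would first be translated into a suggested command (the request and its output here are illustrative):

name_ai ---> ls -la

name_ai ---> . list all files in the current directory, including hidden ones

>>> ls -la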


The project aims to serve as a convenient tool for users who need quick reminders or clarification regarding shell commands they may have forgotten or become confused about. It also provides suggestions for alternative and more efficient commands than the ones they intend to write. Users can also verify the correctness of their commands, all without the need for extensive internet searches.


This project involves conducting experiments with three different models to explore their performance and compare results. The initial focus was on the ChatGPT model, which yielded the most successful and reliable outcomes. Since generating shell commands typically involves short prompts and outputs, the number of tokens remains low, making ChatGPT a cost-effective and suitable choice. However, tests were also conducted with other models. In addition to ChatGPT, the WizardCoder and MPT models were utilized, focusing on CPU performance by employing quantized models. It is nonetheless recommended to run tests with the original versions of these models, as higher success rates are expected in that scenario. The MPT model correctly interpreted some commands while providing incorrect answers in other cases; there were also instances where it returned no usable response at all. Given that it is not a model customized for code, this behavior can be considered natural.
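As a rough illustration of this CPU-only setup, a quantized GGML model can be loaded with a library such as ctransformers. The sketch below is a minimal example under that assumption; the model path, model_type, and prompt format are placeholders, not the exact configuration used in GPTerm.

from ctransformers import AutoModelForCausalLM

# Load a locally downloaded, quantized GGML model for CPU inference.
# The path is hypothetical; WizardCoder builds on StarCoder, while MPT would use model_type="mpt".
llm = AutoModelForCausalLM.from_pretrained(
    "/path/to/wizardcoder-ggml-q4_0.bin",
    model_type="starcoder",
    threads=8,
)

# Ask for a shell command as a JSON payload, mirroring the response format discussed below.
prompt = (
    "Convert the request below into a single shell command and reply only with "
    'JSON in the form {"command": "..."}.\n'
    "Request: go in storage folder and sort only pdf files reversely\n"
)
print(llm(prompt, max_new_tokens=64, temperature=0.1))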


Based on observations, the WizardCoder model outperforms the MPT model. Although it may appear slightly less advanced compared to ChatGPT, it still offers acceptable results for those interested in utilizing open-source models.

It is important to note that while results obtained from ChatGPT are delivered in a highly structured JSON format, other open-source models may occasionally produce noisy responses to questions. Below, you will find the results obtained from open-source models.


1-
{
  "command": "mkdir creator && cp /new/hua/* creator/"
}
```<|im_end|>

2-
~~~json
        {
            "command": "ls -l /storage | grep pdf | sort -r"
        }
        ~~~
3-
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n    
    {"command": "cd storage && touch ny.py"     }'


In particular, when working with open-source models, occasional instances of noisy results, as depicted above, may arise. To solve this problem, a regular expression (regex) approach has been implemented to strip the noise from such outputs. In future iterations, the regex pattern will be refined to cover all possible error cases; in the current implementation, given the time constraints, it has been tailored to handle the most prevalent sources of noise.
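The snippet below is a minimal sketch of this kind of cleanup, not the exact pattern used in GPTerm: it pulls the first JSON object containing a "command" key out of the raw response and ignores the surrounding fences, tildes, and special tokens.

import json
import re

def extract_command(raw_output):
    # Find the first {...} block that carries a "command" key, ignoring markdown
    # fences, tildes, <|im_end|> tokens, and other noise around it.
    match = re.search(r'\{[^{}]*"command"\s*:\s*"[^"]*"[^{}]*\}', raw_output, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))["command"]
    except json.JSONDecodeError:
        return None

# Cleaning up a noisy response like example 2 above:
noisy = '~~~json\n{\n    "command": "ls -l /storage | grep pdf | sort -r"\n}\n~~~'
print(extract_command(noisy))  # -> ls -l /storage | grep pdf | sort -r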

Usage

To begin with, it is recommended to work within a dedicated environment. In this project, the conda environment “py1” (a sample conda env name) is used for development.
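If such an environment does not exist yet, it can be created and activated first; the environment name and Python version below are only examples:

conda create -n py1 python=3.9
conda activate py1

Then: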

pip3 install gpterm-tool


You can run the command above to install the package. Once installed, the tool can be launched by entering the ‘gpterm’ keyword in iTerm or any other terminal application.


  • Run with ChatGPT model (Highly Recommended)
gpterm -k <openai_api_key>
  • Run with MPT model
gpterm -m mpt -p <quantized_mpt_model_path>
  • Run with WizardCoder model
gpterm -m wizardcoder -p <quantized_wizardcoder_model_path>


As the models used have a large number of parameters and are executed on the CPU, the processing speed of the results may be slower. The project was developed on an M1 MacBook Pro, and no tests with GPU implementation have been performed yet. Hence, for professional use, it is recommended to opt for the ChatGPT model.


For quick access to this program, you can add an alias to your zshrc file, which allows it to be launched conveniently. My conda env name is py1.

alias gt='conda activate py1 && gpterm -k <openai_api_key>'

Following that, the program can be easily launched via the terminal by simply entering the “gt” keyword.

You can see the answers given by ChatGPT and WizardCoder models to the same questions below.


ChatGPT vs WizardCoder


  • ChatGPT
name_ai ---> . go in storage folder and sort only pdf files reversely

>>> cd storage && ls -r *.pdf

name_ai ---> . Create a folder named sample_1 and create a txt file for each number starting from 1 to 10 in it and assign that number as the name

>>> mkdir sample_1 && for i in {1..10}; do touch sample_1/$i.txt; done

name_ai ---> . Create a file named nw_1 with .txt extension and add all the numbers from 1 to 10 in it

>>> touch nw_1.txt; echo {1..10} >> nw_1.txt


  • WizardCoder
name_ai ---> . go in storage folder and sort only pdf files reversely

>>> ls -l storage | grep pdf | sort -r

name_ai ---> . Create a folder named sample_1 and create a txt file for each number starting from 1 to 10 in it and assign that number as the name

>>> mkdir sample_1 && cd sample_1   # Wrong one here!

name_ai ---> . Create a file named nw_1 with .txt extension and add all the numbers from 1 to 10 in it

>>> touch nw_1.txt && echo '1 2 3 4 5 6 7 8 9 10' >> nw_1.txt


ChatGPT provides highly accurate responses. The WizardCoder model generated the correct command in two out of the three requests, but produced an incorrect command in one. As mentioned previously, ChatGPT is highly recommended for professional applications. Nevertheless, the effectiveness of the WizardCoder model can be enhanced by retraining it on the commands that are used most frequently.


Project Repo : https://github.com/ademakdogan/GPTerm

Github : https://github.com/ademakdogan

Linkedin : https://www.linkedin.com/in/adem-akdoğan-948334177/
