AI code generation is a field of artificial intelligence that uses machine learning models to generate code automatically. A recent development in this area is the use of large natural language processing (NLP) models to improve the accuracy and efficiency of code generation.
GitHub's innovative product, CoPilot, was launched as a technical preview in June 2021. CoPilot suggests code snippets based on user input, such as code comments and method skeletons. In addition, CoPilot will suggest code improvements or completions, acting like a very advanced auto-complete in your IDE.
The talk of the town has been ChatGPT since its launch on the 30th of November 2022. ChatGPT has seen stellar growth, reaching one million users in just five days, a new world record. ChatGPT employs a variant of the GPT-3.5 language model to generate code based on natural language input. Users describe the code they want in a chat-like setting (hence the name), and the chatbot generates the corresponding code in various programming languages.
With these two tools now available, let's see what the fuss is all about! Can these tools significantly improve the speed and accuracy of software development? But first, let's inspect them more closely.
One of the main advantages of ChatGPT is its natural language processing (NLP) capabilities. This means developers can describe the code they want in plain English, and ChatGPT will generate the corresponding code.
Let’s look at a concrete example of what ChatGPT can do. Imagine you are in a situation where you want to classify Jira tickets into two categories - those that describe work on new features and those that describe maintenance.
Then, given the simple request "Create a TensorFlow model for classifying Jira issue tickets into two categories: features and not-features", the bot produced a very comprehensive explanation:
To create a TensorFlow model for classifying Jira issue tickets into two categories: features and not-features, you will need to follow the steps below:
- Collect and label a dataset of Jira issue tickets. This dataset should contain a large number of Jira issue tickets, with each ticket labeled as either a "feature" or "not-feature" ticket.
- Preprocess the text data in the Jira issue tickets to prepare it for use in a machine learning model. This may include steps such as tokenizing the text, removing punctuation and stopwords, and lemmatizing the words to reduce them to their base forms.
- Split the dataset into training and testing sets. The training set will be used to train the model, while the testing set will be used to evaluate the model's performance.
- Train a machine learning model using the training set. In this case, a good choice for the model might be a convolutional neural network (CNN) or a recurrent neural network (RNN), as these are commonly used for text classification tasks.
- Evaluate the performance of the model on the testing set, and iterate on the model and the preprocessing steps until the desired level of accuracy is achieved.
- Once the model is performing well on the testing set, it can be used on new, unseen Jira issue tickets to classify them into the two categories: features and not-features.
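The first few steps above (cleaning the text and splitting the data) can be sketched in plain Python. This is an illustrative sketch only: the stopword list is a tiny stand-in (a real pipeline would use NLTK's or spaCy's), and the example tickets and labels are invented for demonstration.

```python
import random
import string

# Tiny illustrative stopword list; a real pipeline would use NLTK's or spaCy's.
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "for"}

def preprocess(text):
    """Lowercase, strip punctuation, and drop stopwords."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [tok for tok in text.split() if tok not in STOPWORDS]

def train_test_split(samples, labels, test_frac=0.2, seed=42):
    """Shuffle (sample, label) pairs and split them into train/test sets."""
    pairs = list(zip(samples, labels))
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * (1 - test_frac))
    return pairs[:cut], pairs[cut:]

# Invented example tickets, labeled "feature" / "not-feature".
tickets = [
    "Add dark mode to the settings page",
    "Fix crash when the login form is empty",
    "Implement export of reports to CSV",
    "Upgrade the database driver to the latest version",
    "Add a feature flag for the new dashboard",
]
labels = ["feature", "not-feature", "feature", "not-feature", "feature"]

tokenized = [preprocess(t) for t in tickets]
train, test = train_test_split(tokenized, labels)
print(len(train), len(test))  # 4 1
```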
Ok, so far, so good - a generic explanation of creating a neural network. Probably anyone trying to develop an ML solution would already know these steps, but still - good job, ChatGPT. Now for the code:
```python
# Import libraries
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# Load data and split into training and testing sets
data = ...
X_train, X_test, y_train, y_test = ...

# Preprocess data by converting text to numerical vectors
tokenizer = Tokenizer()
tokenizer.fit_on_texts(X_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)
X_train = pad_sequences(X_train)
X_test = pad_sequences(X_test)

# Load pre-trained word embedding model
embedding_model = ...

# Build neural network model
model = keras.Sequential()
model.add(keras.layers.Embedding(len(tokenizer.word_index) + 1,
                                 embedding_model.vector_size,
                                 weights=[embedding_model.wv.vectors],
                                 input_length=X_train.shape[1],  # sequence length, not the full shape tuple
                                 trainable=False))
model.add(keras.layers.GlobalAveragePooling1D())  # collapse the sequence dimension before the dense layers
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dense(2, activation='softmax'))

# Compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train model
model.fit(X_train, y_train, epochs=10)

# Evaluate model on testing data
model.evaluate(X_test, y_test)

# Use model to predict classification of new Jira issue tickets
new_tickets = ...
new_tickets = tokenizer.texts_to_sequences(new_tickets)
new_tickets = pad_sequences(new_tickets)
predictions = model.predict(new_tickets)
```
Pretty decent! The embedding model is missing, but no worries, you can always ask the bot to suggest something, and it will gladly help. Can you use ChatGPT and save money by not hiring a Junior Data Scientist? Probably not, but soon enough, many junior positions might be required to use this code generation technology to boost their skills and spend less time ‘being junior’.
CoPilot is an AI pair-programming tool that helps developers generate code in real time by completing code snippets based on user input. It is designed to assist developers with simple tasks such as solving Leetcode problems (simple, yeah…) or debugging code. CoPilot extensions exist for many IDEs, including the most popular ones: JetBrains IntelliJ IDEA and VS Code.
What makes CoPilot special is that it uses a machine learning model trained on public GitHub code to understand the context of your project and provide appropriate code suggestions based on the user's input.
This makes CoPilot a valuable resource for developers looking to quickly complete development tasks that don't require too much surrounding context - CoPilot can and does pick up the context of the code. Still, its NLP capabilities are limited compared to ChatGPT's when it comes to understanding context.
Let’s try to give CoPilot the same task we gave ChatGPT and see how it performs. Instead of writing a single sentence in a chat, we have to describe what we want in a comment and wait for CoPilot’s code suggestions. The only parts written by hand here are the docstring and the first comment; the rest is CoPilot’s suggestion (including comments).
```python
def create_model():
    """Create a model that would be used for classification of Jira ticket
    into 2 categories: feature or not feature."""
    # Load data and split into training and testing sets
```
```python
# Imports required by the snippet (not part of CoPilot's suggestion).
import pandas as pd
from tensorflow import keras
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

def create_model():
    """Create a model that would be used for classification of Jira ticket
    into 2 categories: feature or not feature."""
    # Load data and split into training and testing sets
    data = pd.read_csv("jira.csv")
    train_data = data.sample(frac=0.8, random_state=0)
    test_data = data.drop(train_data.index)

    # Tokenize data
    tokenizer = Tokenizer(num_words=1000, oov_token="<OOV>")
    tokenizer.fit_on_texts(train_data["Summary"])
    word_index = tokenizer.word_index
    train_sequences = tokenizer.texts_to_sequences(train_data["Summary"])
    train_padded = pad_sequences(train_sequences, padding='post', maxlen=100)
    test_sequences = tokenizer.texts_to_sequences(test_data["Summary"])
    test_padded = pad_sequences(test_sequences, padding='post', maxlen=100)

    # Define the model
    model = keras.Sequential([
        keras.layers.Embedding(1000, 16, input_length=100),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(24, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid')
    ])

    # Compile the model
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # Train the model
    model.fit(train_padded, train_data["Feature"], epochs=30)

    # Test the model
    test_loss, test_acc = model.evaluate(test_padded, test_data["Feature"])
    print('Test accuracy:', test_acc)

    # Save the model
    model.save("jira_model.h5")
    return model
```
Another fine implementation! The accuracy of either solution greatly depends on the provided data and the quality of its preprocessing - and on hyper-parameter tuning. But on the surface, both AIs gave good code suggestions that require further human editing.
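As a hint of what that tuning might look like, here is a minimal grid-search sketch in plain Python. The `train_and_score` function is a hypothetical stand-in: in reality it would train one of the models above with the given hyper-parameters and return validation accuracy; here it returns a fake deterministic score just so the loop runs.

```python
from itertools import product

def train_and_score(embedding_dim, dense_units, learning_rate):
    """Hypothetical stand-in: train the model with these hyper-parameters
    and return validation accuracy. Here the score is faked deterministically."""
    return 0.7 + 0.01 * dense_units / 24 + 0.1 * (embedding_dim == 32)

grid = {
    "embedding_dim": [16, 32],
    "dense_units": [24, 48],
    "learning_rate": [1e-3, 1e-4],
}

best_score, best_params = -1.0, None
for combo in product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    score = train_and_score(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)  # the combination with the highest (fake) score
```

For real models, each `train_and_score` call is expensive, which is why libraries like KerasTuner or Optuna replace exhaustive grids with smarter search strategies.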
Comparing CoPilot to ChatGPT
CoPilot and ChatGPT are two popular tools in the software development community, and at first glance, it might seem like they are direct competitors. However, a closer examination reveals that they have different code generation capabilities and serve different purposes, even though some overlaps exist.
CoPilot’s main task is code completion: helping developers boost their productivity by writing simple, repetitive code snippets for them. ChatGPT, on the other hand, is a conversational assistant built on the GPT-3.5 language model. While it can assist developers with programming tasks, it is not primarily focused on code generation. Instead, ChatGPT is designed to help developers brainstorm ideas, understand complex concepts, and get guidance on best practices. It can be used much like seeking answers on StackOverflow or having a conversation with another software engineer.
ChatGPT can help you design large systems, providing a roadmap of how it can be done, while CoPilot is better at suggesting a concrete function implementation. For example, ChatGPT can devise a syntax for a new programming language, create a context-free grammar for it, and recommend steps to implement a compiler. Conversely, CoPilot will help you write simple functions like parseExpression() by analyzing your project's structure and overall context.
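To make that concrete, here is the kind of small, self-contained function CoPilot is good at completing: a minimal recursive-descent parseExpression for integer arithmetic. The grammar and function names are my illustration, not actual CoPilot output.

```python
import re

def parse_expression(text):
    """Evaluate an arithmetic expression with +, -, *, / and parentheses
    using a tiny recursive-descent parser."""
    tokens = re.findall(r"\d+|[+\-*/()]", text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def consume():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def expr():                       # expr := term (('+' | '-') term)*
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if consume() == "+" else value - term()
        return value

    def term():                       # term := factor (('*' | '/') factor)*
        value = factor()
        while peek() in ("*", "/"):
            value = value * factor() if consume() == "*" else value / factor()
        return value

    def factor():                     # factor := number | '(' expr ')'
        if peek() == "(":
            consume()                 # opening '('
            value = expr()
            consume()                 # closing ')'
            return value
        return int(consume())

    return expr()

print(parse_expression("2+3*(4-1)"))  # 11
```

A function like this is exactly the sweet spot for CoPilot: narrow scope, a clear name, and enough surrounding context to infer the intent.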
In summary, both CoPilot and ChatGPT are valuable tools for software developers, but they have different code-generation capabilities and serve different purposes. CoPilot is geared towards code completion, while ChatGPT focuses more on inspiration and explanation, and only then on code generation. By understanding the unique capabilities of each tool, developers can choose the one that best fits their needs.