This blog explores how technical teams can leverage LLMs to automatically generate software release notes.
Release notes are an important channel for communicating with customers about recent changes to your product. They allow users to understand new features, major improvements to existing features, bug fixes, and other updates they care about. Release notes not only keep users informed about what is happening in the product, they also provide a historical record of changes and help users take advantage of them. But writing release notes by hand can easily take tens of hours of work, for several reasons.
At Vertesia, we create release notes with the help of a large language model (LLM). The process is divided into three parts: data collection, release notes generation, and manual review. In the first part, data collection, we gather all the information needed for the generation. In the second part, we generate the actual release notes and translate them using an LLM. Finally, our release manager reviews the result and makes manual adjustments before publishing the release notes.
We collect different kinds of data from our project management tool (GitHub Issues) and from our version control system (Git and GitHub pull requests), and we put all of it into a single place for content generation: the Memory Pack. A Memory Pack is an image that packages the context to be used when expanding LLM prompts. It lets users decouple data collection from prompt adjustment and execution, and it serves as an immutable context for the LLM.
Using a Memory Pack has multiple advantages. First of all, it decouples data collection from content generation, serving as a boundary between the two phases. Oftentimes the generated release notes need improvement, but it is not always clear whether the problem comes from the prompts or from the quality of the input data. Having the Memory Pack lets us easily inspect the collected data: if the content of some file is empty, it is obvious that the problem happened during data collection. Secondly, it lets us store a large amount of data as context for the LLM. Typically, if you want to provide a list of issues and the code diff as part of the prompt, it is hard to do so without a Memory Pack, since most online editors only support small chunks of data. Finally, a Memory Pack lets you express a small data collection pipeline as a Recipe, using Memory Commands written in TypeScript. We will discuss this in more detail later in this post.
Now, let's take a look at the file structure of a Memory Pack. If we extract the tarball, we see the different sources of data, each represented as a file or a directory. In this example, GitHub issues are grouped into one directory, GitHub pull requests into another, and the commit log and the code diff are each stored in a single file. This is the structure we chose for our team, but you are free to adapt it to your needs. The only mandatory file is metadata.json, which contains the properties that serve as input arguments for the Memory Pack; they are useful in the prompts.
memories_studio-release-notes_xxx-6c1eec0
├── commits.txt
├── issues
│   ├── 528.txt
│   ├── 621.txt
│   ├── 654.txt
│   └── 660.txt
├── metadata.json
├── pull_requests
│   ├── 358.txt
│   ├── 377.txt
│   ├── 599.txt
│   ├── ...
│   └── 664.txt
└── range_diff.txt
3 directories, 28 files
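To make this more concrete, here is a minimal sketch of what metadata.json could contain, assuming the variables passed at build time (the start and end versions) are exposed as its properties. The exact fields are up to you; the names below are illustrative, not prescribed by the platform.

// Hypothetical content of metadata.json, expressed here as a TypeScript object.
// The "start" and "end" properties mirror the --var-start and --var-end values
// used when building the Memory Pack; "repository" is an example of an extra
// property you might find useful in your prompts.
const metadata = {
  start: "v1.0",
  end: "v2.0",
  repository: "my-org/my-product", // hypothetical value
};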
Now that you understand the basics of the Memory Pack, let's take a look at the Recipe. A Recipe is a text document, written in TypeScript, that contains all the commands a user could run on the command line to assemble a Memory Pack. The Vertesia CLI can build a Memory Pack automatically by reading the instructions from a Recipe. Here are some key instructions that can be used when collecting information for release notes (a sketch of a Recipe follows the table):
| Instruction | Description |
| --- | --- |
| vars | The variables for specifying the target version of the release, and the previous version for the comparison. |
| exec | Execute a shell command to gather information related to your Git repository or information on GitHub. |
| copyText | Copy the content of an inline text as an image entry. |
| copy | Copy a file to the image. |
You can see the full list of instructions on our GitHub repository.
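To give an idea of how these instructions fit together, here is a hedged sketch of what a release-notes Recipe might look like. The instruction names come from the table above, but the import path, the exact signatures, and the helper options are assumptions made for illustration; refer to the GitHub repository for the real API.

// release-notes.ts: an illustrative Recipe sketch (signatures are assumptions).
import { vars, exec, copyText, copy } from "@vertesia/memory-commands"; // assumed module name

// Declare the variables passed on the command line (--var-start / --var-end).
const { start, end } = vars({ start: "v1.0", end: "v2.0" });

// Gather the commit log and the code diff between the two versions.
exec(`git log --oneline ${start}..${end}`, { out: "commits.txt" }); // the "out" option is assumed
exec(`git diff ${start}..${end}`, { out: "range_diff.txt" });

// Record the release metadata exposed to the prompts as input arguments.
copyText(JSON.stringify({ start, end }, null, 2), "metadata.json");

// Copy GitHub issue and pull request exports, prepared beforehand, into the image.
copy("tmp/issues/*.txt", "issues/");
copy("tmp/pull_requests/*.txt", "pull_requests/");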
A Recipe can be used to build a Memory Pack via the Vertesia CLI. When doing so, the CLI executes the instructions inside the script “release-notes.ts” and uploads the final result to Google Cloud Storage. The following command assumes that you have a project registered on our platform.
vertesia memo build \
    --out "memory:release-notes-v2" \
    --var-start "v1.0" \
    --var-end "v2.0" \
    recipes/release-notes.ts
The release notes creation is then split into two steps: the generation and the translation.
For the generation part, you can rely on the Interaction feature of Vertesia Studio. An Interaction defines the task that the LLM is asked to perform. Inside the Interaction for release notes generation, you specify the instructions for the LLM as one or more prompts. Typically, you specify the target audience, the structure of the release notes, how to categorize the items, the format of each item, and so on. When running an Interaction, you can reference the Memory Pack so that pieces of its content are injected into the Interaction. You can also switch the LLM model to decide which one is best in terms of performance and cost, and fine-tune the LLM options to adjust the result, such as lowering the temperature to limit hallucinations. To integrate this Interaction into your system, you can rely on the Vertesia command line to call the Interaction. Here is an example:
mappings=$(cat << EOF
{
"@memory": "my_memory_pack",
"@": "@",
"target_audience": "customers, partners, and developers",
"issues": "@content:issues/*",
"pull_requests": "@content:pull_requests/*",
"code_diff": "@content:range_diff.txt",
"commits": "@content:commits.txt"
}
EOF
)
vertesia run GenerateReleaseNotes -d "$mappings" > "v2.0.md"
For the translation part, you can rely on another Interaction to translate the content for you. Compared to using a separate translation service, doing it in Vertesia lets you group both needs together and keep the workflow simple. You can also use a less capable (and cheaper) model than the one used for generation, since translating the content is relatively easier than synthesizing all the changes between two releases.
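If you want to chain the two steps from a script rather than calling the CLI by hand, a small Node.js/TypeScript wrapper around the same vertesia run commands is enough. The sketch below assumes a second Interaction named TranslateReleaseNotes with a simple input mapping (content and target_language); both names are hypothetical and chosen for illustration.

// Chain generation and translation by shelling out to the Vertesia CLI.
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

// Same mapping as the shell example above.
const mappings = JSON.stringify({
  "@memory": "my_memory_pack",
  "@": "@",
  target_audience: "customers, partners, and developers",
  issues: "@content:issues/*",
  pull_requests: "@content:pull_requests/*",
  code_diff: "@content:range_diff.txt",
  commits: "@content:commits.txt",
});

// Generate the English release notes.
const english = execFileSync("vertesia", ["run", "GenerateReleaseNotes", "-d", mappings], {
  encoding: "utf8",
});
writeFileSync("v2.0.md", english);

// Translate them with a second, cheaper Interaction (hypothetical name and mapping).
const translationInput = JSON.stringify({ content: english, target_language: "fr" });
const french = execFileSync("vertesia", ["run", "TranslateReleaseNotes", "-d", translationInput], {
  encoding: "utf8",
});
writeFileSync("v2.0.fr.md", french);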
If we put all the pieces together, here is a diagram summarizing the key steps of the release notes generation. In the business layer (yellow), you can see the processes for generating the release notes; in the application layer (blue), how those actions are achieved using the Vertesia Platform; and in the technology layer (green), which technologies support the generation.
Why should you use Vertesia to generate release notes? Here are some of the benefits, as I see them:
In this blog, we shared how you can easily generate release notes with Vertesia. We introduced the new concepts of the Recipe (Memory Commands) and the Memory Pack, which provide a portable solution for data collection and storage. Then we saw how to use Studio to write a prompt and run it inside an Interaction against an LLM. Finally, we showed the whole solution and shared some of the benefits of choosing Vertesia for this kind of task.
To see Vertesia in action, schedule a live demo with one of our LLM experts.