This post introduces an approach to writing software that uses generative AI tools to iteratively write the views and navigational structure for a mobile app from a set of product requirements. Over time I hope this system can grow to accommodate more and more aspects of mobile app development.
Using this system, the prompts used by the AI to generate the source code are not written from scratch each time by the developer; instead they are saved in libraries that are pre-configured to solve common problems using a particular pattern. For example, you might choose a library that implements a set of views using the MVVM pattern, or you could choose The Composable Architecture (TCA). The same requirements could then be used to generate the underlying structure for either approach.
This results in a new type of development environment where you write software by combining feature specifications, which encode a description of your product requirements, with prompt generators, which encode a chosen approach to implementing those requirements.
Before diving into how I’ve started building this sort of system, I think it’s worth expanding a little on why I think it could be useful.
This is a high-level summary of how the Generative AI App Builder tool is built from a set of components that can be chosen and adapted by the user, giving flexibility and control over the technical design of the generated solution.
Please note: this system is being put together as a side project; it is constantly evolving, and I expect parts or all of it to be overtaken by more fully funded ‘big-tech’ projects. However, at the time of writing I don’t know of any tool on the market that does what I’m proposing here, so for now I’m ploughing ahead!
The diagram below shows how the Generative AI App Builder sits alongside your source code. It is generally recommended that these are two separate repositories, to avoid adding any unnecessary files to your application source repository.
The structure within the App Builder can be split into three main sections:
Currently this system uses Aider Chat to execute the prompt inferencing commands; follow its installation instructions to get it set up and available through the command line. It is recommended to use a Python environment manager such as pyenv to sandbox your installation of Aider.
Provide the feature specifications that describe your product requirements. The format of the requirements is defined by the package generating the code. For example, to implement a view and data structure with MVVM, the MVVM package needs to know about the view you want to create and the ‘domain’ models you want represented by that view. At the time of writing there are only a very small number of ‘design pattern packages’ (see the note about this being a side project); however, if this approach grows, we may need to abstract the specification of features into a common format that can be used interchangeably between implementation patterns.
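To make the idea of interchangeable pattern packages concrete, here is a purely illustrative sketch of what such a common format might look like. None of these names (`FeatureSpecification`, `mvvmPrompt`, `tcaPrompt`) exist in the tool today; this is a possible future direction, not its actual API:

```swift
// Illustrative sketch only: a possible common feature-specification
// format, consumed by two different pattern packages. All names here
// are assumptions, not part of the actual tool.
struct FeatureSpecification {
    let featureName: String
    let modelTypes: [String]
}

// Each pattern package turns the same specification into its own prompt.
func mvvmPrompt(for spec: FeatureSpecification) -> String {
    "Make a view \(spec.featureName)View with a \(spec.featureName)ViewModel " +
    "exposing models: \(spec.modelTypes.joined(separator: ", "))"
}

func tcaPrompt(for spec: FeatureSpecification) -> String {
    "Make a TCA feature \(spec.featureName)Feature with a Reducer and State " +
    "holding models: \(spec.modelTypes.joined(separator: ", "))"
}
```

The same `FeatureSpecification` value could then be handed to either function, so switching architecture would mean switching the prompt generator, not rewriting the requirements.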
Configure the constants used in the Gen AI script, such as the path to your application project relative to where you run the compiled script.
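As an illustration only (the exact shape of `AiderControl` and its constants in the tool may differ), the configuration might look something like:

```swift
// Illustrative sketch of the kind of constants referenced later via
// AiderControl.Constants.appModuleRoot. The path value is a placeholder:
// set it to your app project's location relative to where the compiled
// script runs.
enum AiderControl {
    enum Constants {
        /// Root of the target app module, as a relative path.
        static let appModuleRoot = "AppGenAISwiftUIStarter/AppGenAISwiftUIStarter/"
    }
}
```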
The entry point to the system is a method on AiderControl, run(), which you trigger to generate the prompt and edit the code:
```swift
/**
 Control your generation by editing the commands below.
 */
func run() {
    runDeckGeneratorViewBuilder()
}
```

The builder puts the pieces together and calls the pipeline runner:
```swift
func runDeckGeneratorViewBuilder() {
    let viewBuilder = NewViewBuilder(newView: .deckGeneratorViewFeatureSpec())
    promptPipelineRunner.inference(
        using: viewBuilder,
        with: viewBuilder
    )
}
```

As an example, the following view specification:
```swift
static func deckGeneratorViewFeatureSpec() -> MVVM.ViewSpecification {
    .init(
        viewName: "DeckGeneratorView",
        viewFolderPath: "\(AiderControl.Constants.appModuleRoot)Views/",
        models: [
            .init(
                variableName: "generatedQuestions",
                modelType: "Question",
                modelPath: "\(AiderControl.Constants.appModuleRoot)Domain/Question.swift",
                isCollection: true
            )
        ]
    )
}
```

produces this prompt:
```
Using CollectionOfItemsView as an example, make a new view DeckGeneratorView in AppGenAISwiftUIStarter/AppGenAISwiftUIStarter/Views/ that is initialised with variables: generatedQuestions = Question. Using CollectionOfItemsViewViewModel as an example, make a DeckGeneratorViewViewModel in the same file as DeckGeneratorView This is a first draft, keep the solution simple. IMPORTANT: implement the solution without asking any questions
```
The prompt references an example view, CollectionOfItemsView, which is part of the MVVM package, and copies that example’s approach to generate the new view.
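As a sketch of how the specification’s fields could be interpolated into that template, here is my own approximation of a renderer. The types mirror the example specification above, but `renderPrompt` is an assumption, not the package’s actual code, and the real template may differ:

```swift
// Approximation of the MVVM package's prompt template, interpolating
// fields from a specification like the one shown above. ModelSpec,
// ViewSpec and renderPrompt are illustrative names only.
struct ModelSpec {
    let variableName: String
    let modelType: String
}

struct ViewSpec {
    let viewName: String
    let viewFolderPath: String
    let models: [ModelSpec]
}

func renderPrompt(for spec: ViewSpec) -> String {
    // "generatedQuestions = Question" style variable list.
    let variables = spec.models
        .map { "\($0.variableName) = \($0.modelType)" }
        .joined(separator: ", ")
    return "Using CollectionOfItemsView as an example, make a new view \(spec.viewName) " +
        "in \(spec.viewFolderPath) that is initialised with variables: \(variables). " +
        "Using CollectionOfItemsViewViewModel as an example, make a \(spec.viewName)ViewModel " +
        "in the same file as \(spec.viewName) " +
        "This is a first draft, keep the solution simple. " +
        "IMPORTANT: implement the solution without asking any questions"
}
```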
The generated code looks like this:
```swift
import SwiftUI

struct DeckGeneratorView: View {
    @State var viewModel: ViewModel

    var body: some View {
        DeckGeneratorContentView(
            questions: viewModel.questions,
            viewActionOne: viewModel.viewActionOne
        )
    }
}

struct DeckGeneratorContentView: View {
    let questions: [Question]
    let viewActionOne: () -> Void

    var body: some View {
        VStack {
            Button(action: viewActionOne) {
                Text("View Action 1")
            }
            ForEach(questions) { question in
                Text(question.question)
            }
        }
    }
}

extension DeckGeneratorView {
    @Observable
    class ViewModel {
        var questions: [Question]

        init(questions: [Question]) {
            self.questions = questions
        }

        func viewActionOne() {
            // Placeholder for view action. Update as needed
            if let firstQuestion = questions.first {
                questions[0] = Question(id: UUID(), question: firstQuestion.question + "!", answer: firstQuestion.answer)
            }
        }
    }
}
```

At this point, it’s fair to say the generated code is not going to change the world; it’s a pretty basic example of AI-generated code. However, the purpose of this post is simply to demonstrate, with a simple example, how the system functions. We have taken a basic set of requirements and an MVVM pattern builder and generated a View and View Model structure. We have separated the product requirements from the engineering requirements and encoded both in a format that we can use repeatedly to generate code as the project evolves. We could use the MVVM prompts to add more features, or we could use the product requirements to re-generate the code with a different architecture. Even though these are small first steps, the hope is that over time the set of design patterns encoded in this way will become a useful resource for teams looking to incorporate Gen AI tools in a way that is tightly controlled and clearly defines the engineering expectations for the team.