Dul Zorigoo

AI Terminal

I read an interesting pair of essays last year called Malleable software in the age of LLMs by Geoffrey Litt and The future of software, the end of apps by Paul Chiusano. In these essays the authors talk about software being walled off through man-made restrictions and what could be possible if we had the ability to change and configure software ourselves, highlighting AI prompters as an interesting medium of configuration.

There's much more mentioned in these essays and I highly recommend giving them a read. I thought about what these prompters might look like and how they would work, and created a couple of mobile screens along with some of my own thoughts.

The AI Terminal

I like the idea of calling this an AI terminal, as the name helps encompass all the functionality we'd like to place within that prompter: the ability to configure software, create one-off GUIs, write scripts, and so on.
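To make that a bit more concrete, here's a rough sketch of the kind of command surface such a terminal might expose. It's written in TypeScript with hypothetical names and shapes; none of it comes from the essays, it's just one way to picture the capabilities.

```typescript
// Hypothetical sketch of an AI terminal's command surface.
// All names and shapes here are assumptions, not an existing API.

type ArtifactKind = "gui" | "script" | "configuration";

interface Artifact {
  id: string;
  kind: ArtifactKind;
  title: string;
  // For one-off GUIs: a link the host app can navigate to.
  url?: string;
  createdAt: Date;
}

interface AITerminal {
  // Free-form questions and commands, not just settings changes.
  ask(prompt: string): Promise<string>;
  // Change the host app's configuration through natural language.
  configure(prompt: string): Promise<void>;
  // Generate a one-off GUI, script, etc. and return it as an artifact.
  create(kind: ArtifactKind, prompt: string): Promise<Artifact>;
}
```

The point being that configuration is just one of several verbs, not the only one.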

How do you access it?

There are a couple of possible ways of accessing the AI terminal: through settings, a floating action button, a navigation button, core function integration, and so on. Here's how they would look.

Settings

Settings is the natural place for software to put configuration options, so it's a no-brainer that the AI terminal could live there. One thing I don't like about this is that placing the AI terminal within settings implies it can only be used for configuration and settings-related actions. That isn't the case; you should still be able to ask questions and give the AI terminal commands to execute various actions.

Floating action button (FAB) / Navigation button

Ah, the good old floating action button and the navigation button. This is another viable spot to summon the AI terminal from, but I would save this real estate for more important actions the software relies on, like creating posts in a social media app or creating new notes in a note-taking app.


Core function integration

The one I like best is integrating the summon button into the software's core functionality, for example embedding the prompter in the editor when you're creating a new note in a note-taking app, or adding it into your feed. I think this strikes a nice balance: it implies that the AI prompter is a more universal action, one you can use to configure your software, while still leaving space for the software's own functionality.

Interacting with the prompter and getting responses

Upon interacting with the prompter (here I'm taking the example of creating a one-off GUI), you're handed a link that takes you to the GUI it generated.

Once you enter the GUI the AI terminal has provided for you, you'll be able to make adjustments and see their effects on the fly, much like a code editor with a running local server.
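Here's a rough sketch of that flow, reusing the hypothetical AITerminal interface from the earlier sketch; openInApp is an assumed host-app navigation helper, not a real API.

```typescript
// Hypothetical flow: ask for a one-off GUI, open the link it returns,
// then iterate with follow-up prompts, much like a live-reloading editor.
declare function openInApp(url: string): void; // assumed host-app navigation helper

async function createOneOffGui(terminal: AITerminal): Promise<void> {
  const artifact = await terminal.create(
    "gui",
    "A simple tracker for my daily water intake"
  );

  // The prompter hands back a link to the generated GUI.
  if (artifact.url) {
    openInApp(artifact.url);
  }

  // Adjustments are just more prompts; the running GUI reflects them on the fly.
  await terminal.ask(`In ${artifact.title}, make the buttons larger and add a weekly chart`);
}
```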

Editing

I imagine it would be helpful to see the changes you've made to a given GUI, script, or configuration. Tapping the "Edited" button would open a changelog, a list of changes sorted by date.
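A minimal sketch of what the data behind that "Edited" button might look like, again with made-up names:

```typescript
// Hypothetical shape for the changelog behind the "Edited" button:
// one entry per change, shown newest-first.
interface ChangeEntry {
  artifactId: string;
  description: string; // e.g. "Made buttons larger, added a weekly chart"
  editedAt: Date;
}

function changelogFor(entries: ChangeEntry[], artifactId: string): ChangeEntry[] {
  return entries
    .filter((entry) => entry.artifactId === artifactId)
    .sort((a, b) => b.editedAt.getTime() - a.editedAt.getTime()); // newest first
}
```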

After an artifact (GUI, script, configuration, etc.) has been created, you'd be able to use it immediately in that instance, or save it for use later on.

Saving

Since you're creating an artifact to use around your software, having the option when you save to choose where you'd like to save or bookmark it, depending on what you generated, would be super useful.
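As a sketch, assuming the Artifact shape from the earlier sketch, the save step could be as small as the artifact plus a user-chosen destination:

```typescript
// Hypothetical save step: the user picks where the artifact should live,
// so it can be found again where it is actually useful.
type SaveDestination = "home screen" | "bookmarks" | "notes folder" | "settings";

interface SavedArtifact {
  artifact: Artifact; // the Artifact shape sketched earlier
  destination: SaveDestination;
  savedAt: Date;
}

function saveArtifact(artifact: Artifact, destination: SaveDestination): SavedArtifact {
  return { artifact, destination, savedAt: new Date() };
}
```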

I love this concept

I really like this concept birthed by the authors of the two essays and I see a lot of potential for it. There are already products like the new Rabbit Pocket Companion, and I'm sure OpenAI and other developers are building something of the sort within ChatGPT.

Everything above is a set of very surface-level thoughts on the topic, and I'm sure there are many more nuances to be dug into around the concept.