How I’m Using Generative AI: January 2024
Generative AI, whether the output is visual or text, has been making waves for over a year now. It’s the biggest sea change in the tech industry since the smartphone appeared. As with the mobile revolution, and every technological shift before it, there will be great uses and disturbing abuses of generative AI. New businesses will be formed that could not exist before, and some old businesses will prove unfit for the times and fade in relevance. The early leaders are already trying to frighten politicians and use regulatory capture to keep competitors from climbing up the ladder behind them.
What will be the end state of this shift? In my opinion, not nearly as sci-fi or as existential as the industry boosters and critics like to say. Despite their stated concerns about how AI will destroy the world, commentators have been handed a huge gift: something to talk about that centers on their own interests, the production of creative work. So let the commentators talk. Generative AI is here and it is useful today. Here’s how I’m using it on a few different projects.
Career Minder
In all my design work, including Career Minder, I’m using generative AI to generate sample content. When I graduated from school and began working in design agencies, we mocked up layouts before the copy was written using the same placeholder techniques people have been using for centuries. Now, though, I can use generative AI to give me placeholder content that is accurate and easily usable in modern design tools. Just write a prompt like this into ChatGPT and you’ll have something really easy to work with:
Give me JSON of 20 different users with different birthdays and short professional introductions in the following format: { name: "First Last", birthday: "January 2, 2023", introduction: "A very industrious person" }
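For reference, the response comes back in exactly that shape, ready to paste into a prototype or a design-tool plugin. Here’s a hypothetical sample of that kind of output, typed out in TypeScript (the names and details below are invented for illustration, not real output):

// Hypothetical placeholder data in the shape the prompt above asks for.
type SampleUser = { name: string; birthday: string; introduction: string };

const sampleUsers: SampleUser[] = [
  { name: "Rosa Delgado", birthday: "March 14, 1991", introduction: "A detail-oriented project manager who loves a tidy spreadsheet." },
  { name: "Tom Abara", birthday: "July 8, 1986", introduction: "A front-end developer who tinkers with home automation on weekends." },
];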
In Career Minder, we’re also using embeddings to match content appropriately in ways that could not have been done before, and we’re beginning to work on using prompt engineering for future product features. Additionally, we’ve used generative AI to build the content for Harriet, the magical woodworker who serves as our example demo-account user. This has included a full professional history for an imaginary person in a magical kingdom, as well as headshots and imagery for use in marketing materials.
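I won’t go into how our matching works here, but the general idea behind embeddings is simple: turn two pieces of text into vectors and measure how close they are. A minimal sketch of that comparison, assuming the vectors already came from an embeddings model (this is an illustration, not Career Minder’s actual code):

// Cosine similarity between two embedding vectors of the same length.
// Values near 1 mean the texts are semantically similar; values near 0 mean unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

Score a piece of content against everything else you have, sort by the result, and the best matches rise to the top.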
Additionally, generative AI is really good at helping with coding problems. For example, I recently wanted to build a Cloudflare Worker that would perform a task on a recurring basis. Within moments I had serviceable boilerplate code. While I wouldn’t want to turn the development of Career Minder over to generative AI, it can certainly shorten the “encounter problem > search the internet > find a Stack Overflow article that kind of answers the question > try it out > realize that your problem isn’t quite the same > repeat” cycle that most developers go through on a regular basis.
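For the curious, a recurring task on Cloudflare Workers is handled with a Cron Trigger and a scheduled handler. This is a rough sketch of the shape of that boilerplate rather than my actual worker (the URL and the task are placeholders); the schedule itself goes in wrangler.toml under [triggers] crons:

// Module-syntax Cloudflare Worker that runs whenever its cron schedule fires.
// wrangler.toml would contain something like:  [triggers]  crons = ["0 9 * * *"]
export default {
  async scheduled(event: { scheduledTime: number; cron: string }, env: unknown, ctx: unknown): Promise<void> {
    // Placeholder recurring task: ping an endpoint and log the result.
    const res = await fetch("https://example.com/health");
    console.log(`Cron "${event.cron}" ran at ${new Date(event.scheduledTime).toISOString()}: ${res.status}`);
  },
};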
Shining Path Book
In the book on Church history and the Shining Path that I’m writing, I’m using LLMs and other AI tools in my research. Here you may be concerned: don’t LLMs hallucinate and just make stuff up? Yes, they do. However, if you know that going in, you can be more deliberate about how you use them. I’ll rank three things I use AI for, from least to most reliable.
Least Reliable
Just ask the LLM a question about the subject. For example:
Tell me about the massacre in Ocros, Peru in the early 80s.
Now, this is a pretty obscure event in a little-documented civil conflict from around 40 years ago, in a part of the world ignored by most academics and journalists. Most of the sources that cover it are in Spanish, in academic journals that may not have been in the training data for most LLMs. So the answer I would get back from an LLM would be pretty brief, and if I pushed the LLM for more information I would not be able to trust anything it said, because it is likely making it up. That’s what it was built to do.
However, if I want to know more general information about a subject that is pertinent to what I am writing, I could ask something like:
Tell me about the land reform movement in Latin America in the mid-twentieth century, including its causes and results.
The response would give me a serviceable answer along with hints about where to look for further sources to dig deeper as needed. Questions like these are excellent when you are reading academic writing and want to understand jargon and references without being led down a rabbit hole away from your main subject.
Tools I use for this: ChatGPT. I use ChatGPT 4, but 3.5 is good too.
Pretty Reliable
Let’s say I have a source that is huge and dense with information. I’m looking for something specific but don’t know where to start. Peru commissioned a Truth and Reconciliation Report in the early 2000s after the Shining Path was defeated, and the resulting report is enormous, with a lot to sift through. One thing generative AI allows me to do is upload sections of the report, in Spanish, and then ask questions about it. For instance, I might ask:
Does this PDF mention the killings of Catholic priests?
From there I can narrow down where I need to read and learn more.
Tools I use for this: Claude by Anthropic. It’s similar to ChatGPT, but has a much higher token limit, which is sometimes necessary for larger texts.
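I do all of this through Claude’s chat interface, but the same question-asking pattern can be scripted if you ever need to. Purely as a sketch, assuming you’ve already extracted a section of the report to plain text and are using Anthropic’s TypeScript SDK (both assumptions on my part, not how I actually work):

import Anthropic from "@anthropic-ai/sdk";

// Ask a question about a chunk of text pulled out of a larger document.
// Assumes ANTHROPIC_API_KEY is set in the environment.
async function askAboutSection(sectionText: string, question: string): Promise<string> {
  const anthropic = new Anthropic();
  const message = await anthropic.messages.create({
    model: "claude-2.1",
    max_tokens: 1024,
    messages: [{ role: "user", content: `${sectionText}\n\n${question}` }],
  });
  const first = message.content[0];
  return first.type === "text" ? first.text : "";
}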
Very Reliable
I often have interviews that would be much easier to work with in text form. For this I use OpenAI’s Whisper to transcribe audio locally on my computer. I use it via a great Mac tool called MacWhisper, which allows me to edit the transcription, assign speakers to different parts, and then output in a variety of formats. Using the largest Whisper model, I can transcribe Spanish as well as English. Is it 100% accurate? No. However, it is very good, and I find that listening through the audio once and correcting the occasional issue as I go means a two-hour interview can be transcribed in a single morning or afternoon. At various times throughout my career I’ve had to transcribe interviews from scratch for user research, a process that could eat up a lot of time. Now, things are much faster.
Tools I use for this: Whisper & MacWhisper.
Privacy & Working Locally
Due to the subject matter of the book I’m working on, privacy concerns are common. When working with an interview that is not in the public domain, the last thing I want to do is upload it to a service where it could be used to train models or someday end up in a response to someone’s prompt. However, there are models you can run on your local machine, like Meta’s open-source Llama 2. The benefit is that you get the capabilities of generative AI privately, without the risk of uploading sensitive information elsewhere.
If you’re comfortable using developer tools, you can open your terminal and get these things running quickly. I prefer applications that wrap these downloadable models in a chat interface and other tools. The app I use to run models locally on my Mac is called Faraday, which gives you the familiar chat interface, lets you use different models that may be fine-tuned for different tasks, and even lets you create personas that provide the models with context about the kinds of responses you want. If you want to run image generation locally on your Mac with open-source models, you can use DiffusionBee.
Going Forward
These tools and technologies are changing rapidly. I’m sure there are things out there right now that would blow my socks off if I knew about them, but it’s hard to keep up. It’s also hard to remain grounded in what is real and what is techno-optimist/doomer fantasy. I’m interested in trying out the Mistral open-source models, and I’m curious how reliable prompt-engineered results can be. It’s an exciting time, and I have found it fruitful to embrace it. Are you using generative AI in your life? How?