Some weeks hit different. This one smacked me in the face (in a good way) with just how deep the AI rabbit hole goes.
Somewhere between the courses I took and stumbling across a new YouTube obsession, something clicked. A switch flipped on in my head.
I ended up on a side adventure during my weekly learning block, looking for advanced prompting techniques. I came back with a total of 22.
I’m planning to use that list to build out a short-form video series later, but for now, I’m walking through them here. The plan is to lay out half of them now and the other half next week so I can internalize the concepts more effectively.
I’m still miles away from where I want to be. But if I’m going to eat this elephant, there’s only one way to do it. One bite at a time.
This post was shaped using ChatGPT to help organize and clarify ideas, but the thoughts, learnings, and final draft reflect hours of real human effort.
AI Tools and Courses I Tried This Week
I completed the How to Research and Write Using Generative AI Tools course on LinkedIn Learning this week. It was part of the broader ‘Develop Your Prompt Engineering Skills’ path, but this one stood out from the others.
It made me stop and rethink how I write prompts, yet again. One idea that hit harder than the rest was to give the AI as much input as you’d give a human.
That line stuck. I’ve already been treating ChatGPT like a partner, but I’m still finding myself using simple prompts to start a thread, then refining it as I go.

Even with the prompt builder I created for this exact thing, I’m still in the habit of tossing out quick one-liners. This is exactly why I’m hammering these beginner courses and reinforcing everything.
Speaking of, I used my prompt builder to help with planning updates to my portfolio and blog. I needed a breakdown of what to tackle first and how to align everything under a clearer personal brand. Even though I built the tool, I’m still finding new ways to get value out of it.
Out of everything I touched this week, the How to Research course delivered the most value. No hype. Just practical advice from someone passionate about AI that sparked my curiosity and fed my desire to learn more.
Also, I stumbled on Nick Saraev’s YouTube channel. That guy is incredibly intelligent, and I’ve already found myself watching his $2.4M of Prompt Engineering Hacks in 53 Mins (GPT, Claude) several times.
There’s so much value packed into it that I feel like I need a few more listens before I can absorb it.
Mapping My AI Learning Curve
There were a few techniques that stuck with me this week, even if I can’t remember where I picked them up specifically. Phrases like “Take a deep breath” or “Go step by step” were a definite mental level up.
I wouldn’t have thought to feed a language model calming instructions, but it makes sense when you consider how they’re trained. Those patterns probably trigger more thoughtful or structured outputs.

One trick I started using immediately after hearing it is asking the model to critique its own output before making revisions. This one came from Lenny’s Podcast on YouTube in his video with Sander Schulhoff around the 28:30 mark. (Another must-watch.)
The idea is to tell the model to examine its previous output and offer constructive criticism. Then you tell it to implement those changes. You can repeat this as many times as you like, but for some reason it starts to break down after three or four rounds.
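I haven’t written any Python myself yet (more on that later), but here’s a rough sketch of what that critique-and-revise loop could look like, assuming the OpenAI Python SDK; the model name, the sample task, and the three-round cap are just placeholders:

```python
# Sketch of a critique-then-revise loop, assuming the OpenAI Python SDK.
# The model name and round count are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = ask("Write a short product description for a reusable water bottle.")

# Ask the model to critique its own output, then revise based on that critique.
# Quality tends to plateau (or degrade) after a few rounds, so cap the loop.
for _ in range(3):
    critique = ask(f"Offer constructive criticism of this text:\n\n{draft}")
    draft = ask(
        f"Revise the text below to address the critique.\n\n"
        f"Text:\n{draft}\n\nCritique:\n{critique}"
    )

print(draft)
```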
There was also a simple acronym I came across this week that I want to remember:
CREATE — Character, Request, Examples, Adjustments, Type of Output, Extras.
It’s a good summary for building stronger prompts and it aligns with everything else I’ve learned so far:
- Character – Set the role the AI should play.
- Request – Say exactly what you want.
- Examples – Show a few samples if you can.
- Adjustments – Add tone, style, or constraints.
- Type of Output – Tell it how to deliver the result.
- Extras – Ask for critique, revisions, or alternatives.
Small thing with big value. Having a mental framework like this will hopefully keep me from jumping straight into lazy prompts going forward. Next time I’m at the store, I’m going to pick up some sticky notes so I can stick it on my monitor here at the house.
AI Terms/Definitions
I’ve been adding new terms to my glossary every week to help lock these ideas into place. As always, these aren’t dictionary-perfect. They’re just how I understand them right now, based on what I’ve seen and read this week.

Here’s the first half of the advanced techniques I found, with some quick example prompts:
1. Self-Consistency Prompting
Rather than taking the model’s first answer, you ask it to generate multiple responses and then choose the most consistent or logical one. This taps into the model’s probabilistic nature and improves reliability for reasoning-heavy tasks.
Prompt: “Generate five different answers to this logic puzzle and choose the one that makes the most sense.”
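A rough sketch of how this could be automated, assuming the OpenAI Python SDK and a simple majority vote as one way to pick the “most consistent” answer (the model name and sample count are placeholders):

```python
# Sketch of self-consistency: sample several answers, keep the most common one.
# Assumes the OpenAI Python SDK; model name and sample count are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float = 1.0) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=temperature,  # higher temperature -> more varied samples
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

question = (
    "A bat and a ball cost $1.10 total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost? Reply with just the dollar amount."
)

# Generate five independent answers, then take the majority vote.
answers = [ask(question) for _ in range(5)]
most_common, count = Counter(answers).most_common(1)[0]
print(f"Samples: {answers}")
print(f"Most consistent answer ({count}/5): {most_common}")
```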
2. Tree of Thought Prompting
This technique prompts the model to explore multiple reasoning paths (like branches on a tree) before settling on a final answer. It’s ideal for complex problem-solving where linear thinking might miss better alternatives.
Prompt: “List three different ways to approach solving homelessness in urban areas, then pick the most practical one and explain why.”
3. Role-Playing Prompting
You ask the model to pretend. “Act as a lawyer,” “act as a coach,” etc. Works well when you want tone or perspective baked into the response.
Prompt: “Act as a UX designer. Walk me through improving the checkout experience on an e-commerce site.”
4. Chain-of-Thought Prompting
You tell the model to walk through its reasoning step by step. It slows the output down in a good way and gives you a peek inside the “why.”
Prompt: “What’s the square root of 144? Walk through the steps out loud.”
5. Zero-shot Chain-of-Thought
Same idea as above, but you don’t give it examples first. You just say something like, “Let’s think this through,” and the model starts reasoning step-by-step.
Prompt: “Let’s think through how credit card interest compounds over time.”
6. Self-Ask Prompting
You ask the model a big, hairy question, and it breaks it into smaller ones and answers each before pulling it all together. Like it’s interviewing itself.
Prompt: “Why is the housing market so expensive? Break it into smaller questions and answer each before giving the final answer.”

7. Least-to-Most Prompting
Start with the easiest part of a task, then build up to the hard stuff. Helps avoid tripping on complexity too early.
Prompt: “Start by explaining what a browser is. Then build up to explaining how web servers and back-end frameworks interact.”
8. Prompt Chaining
The output from one prompt becomes the input for the next. Like stacking LEGO bricks until you’ve built a whole process. To me, it feels like just having a conversation with the model.
Prompt (1): “Write a tweet about the benefits of using AI for small businesses.”
Prompt (2): “Turn that tweet into a short LinkedIn post with a friendly tone.”
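A minimal sketch of that chain in code, assuming the OpenAI Python SDK; the first call’s output just gets dropped into the second prompt:

```python
# Sketch of prompt chaining: feed one prompt's output into the next prompt.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate the tweet.
tweet = ask("Write a tweet about the benefits of using AI for small businesses.")

# Step 2: the tweet becomes the input for the next prompt.
linkedin_post = ask(
    f"Turn this tweet into a short LinkedIn post with a friendly tone:\n\n{tweet}"
)

print(linkedin_post)
```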
9. Generated Knowledge Prompting
You ask the model to generate a quick background summary first, then use that info to answer the real question. It’s basically priming the model with its own notes.
Prompt: “Summarize the pros and cons of remote work, then recommend a hybrid policy for a team of ten.”
10. Retrieval-Augmented Generation (RAG)
You give the model external sources to pull from. Instead of relying on training data alone, it uses real docs to ground its answers. Less hallucination, more “here’s the receipt.”
Prompt: “Using the uploaded case study, write a one-paragraph summary focused on ROI.”
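Real RAG pipelines use embeddings and a vector store to find the right passages, but here’s a bare-bones sketch of the core idea, assuming the OpenAI Python SDK and a local case_study.txt file standing in for the “uploaded” source:

```python
# Bare-bones sketch of retrieval-augmented generation: pull in an external
# document and tell the model to answer only from it. Real RAG pipelines add
# embeddings and a vector store for retrieval; this just reads a local file.
# Assumes the OpenAI Python SDK; the file path and model name are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

case_study = Path("case_study.txt").read_text()  # the "uploaded" source document

prompt = (
    "Using only the case study below, write a one-paragraph summary focused on ROI. "
    "If ROI isn't mentioned, say so instead of guessing.\n\n"
    f"CASE STUDY:\n{case_study}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```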
11. CREATE Prompting
A handy framework for writing better prompts: Character, Request, Examples, Adjustments, Type of Output, Extras.
Prompt: “Act as a friendly recruiter (Character). Give feedback on my resume (Request). Here are three bullet points (Examples). Be concise and constructive (Adjustments). Format it as a numbered list (Type of Output). Include at least one suggestion I didn’t mention (Extras).”
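And a tiny sketch of a CREATE-style prompt builder, similar in spirit to the prompt builder I mentioned earlier; this is plain Python with made-up field names and no API call needed:

```python
# Sketch of a small CREATE prompt builder: assemble the six pieces into one prompt.
# Pure Python, no API call; the parameter names mirror the CREATE acronym.
def build_create_prompt(character: str, request: str, examples: str,
                        adjustments: str, output_type: str, extras: str) -> str:
    parts = [
        f"Act as {character}.",
        request,
        f"Examples:\n{examples}",
        f"Adjustments: {adjustments}",
        f"Format: {output_type}",
        f"Extras: {extras}",
    ]
    return "\n\n".join(parts)

prompt = build_create_prompt(
    character="a friendly recruiter",
    request="Give feedback on my resume.",
    examples="- Led a team of 5\n- Cut onboarding time by 30%\n- Managed vendor contracts",
    adjustments="Be concise and constructive.",
    output_type="A numbered list.",
    extras="Include at least one suggestion I didn't mention.",
)
print(prompt)
```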
Top AI Voices to Follow
The rabbit holes are still alive and well, but they’re getting a lot more intentional. I’m steering my scroll time toward creators who actually move the needle instead of just adding noise.

Here are the names that stuck with me this week:
Nick Saraev
His 53-minute video on prompt engineering hacks wasn’t just useful, it was dense in the best possible way. Like I said earlier, I’ve rewatched it multiple times, and each time I catch something new. I think it’s earned a permanent spot on my AI learning playlist.
Lenny’s Podcast (feat. Sander Schulhoff)
The episode with Sander from Learn Prompting hit hard around the 28-minute mark, but the whole video is full of gems. They walked through a method where you ask the model to critique its own output before making changes. I’ve already used it a dozen times, and it’s working like a charm.
Y Combinator (YouTube)
I just found this channel today, a few hours before writing this post. It looks like a great mixed bag of content, but the one that got my attention was their State-Of-The-Art Prompting For AI Agents video. I haven’t finished it yet, but I think it’s promising.
Most likely, that’s what I’ll be listening to in my car on the way to work tomorrow.
I also followed several of the course instructors I’ve been learning from on LinkedIn. I’m serious about this stuff and want to make sure I’m getting signal from the algorithm and not just noise.
Next Steps in My AI Journey
Prompt engineering is still my main focus, and I’ll be covering the other half of the techniques I gathered next week. But my learning block is going to shift a bit. I’m diving into Adobe’s “Essential Skills in Generative AI for Creatives” path on LinkedIn Learning.
It’s a 7-hour set focused on AI tools inside Creative Cloud, and it feels like the perfect companion to what I’m already building as a digital marketing student.
But the real itch I need to scratch is Python.
I’ve never written a single line of code, but it’s becoming painfully clear that Python isn’t optional if I want to go deeper into AI. Every time I hear about agents or automation, it comes back to Python. It’s the backbone of most of the tools I’m interested in building.

So, while I’m still focused on prompt engineering and foundational marketing work, I’m carving out time to start tackling Python and agent workflows over the coming weeks. I don’t expect to be coding full tools anytime soon, but I do expect to get my hands dirty and start moving.
The faster I learn it, the faster I can stack these skills and get closer to achieving my real goal: giving my kids a better life by turning this obsession into opportunity.
Closing the Loop
This week added real weight to what I’m building. Tools, techniques, frameworks… all of it’s starting to stack. And I’m finally seeing the bigger picture of how this AI stuff connects.
Next week, it’s more techniques, more experimentation, and laying more bricks with the Adobe course. I’ve been meaning to play with Photoshop some more, so I’m really looking forward to my next learning path.
The game plan is simple. Keep showing up, keep learning, and keep taking bites of this elephant until I’ve got it all down. Maybe we can learn together.
If you’ve made it this far, you’re probably on a similar path. Subscribe if you haven’t. Connect with me on LinkedIn or Twitter/X too. I’d love to talk about this stuff with you.

Are you chasing mastery or just tinkering? There’s nothing wrong with dabbling, but I’ve hit the point where I need results. What about you? What’s the skill or project you’re betting on to move you forward?
Social Links:
- LinkedIn: https://www.linkedin.com/in/donaldkell/
- Twitter/X: https://twitter.com/FutureproofDK

