Futureproof Log 3 – Coursera Can Wait, I’ve Got Groceries to Buy

AI is taking over the world. Well, it’s already taken over the world, and the world of marketing is playing catch-up. In this blog, I’m exploring what AI means to me as a new marketing professional and educating myself in public.

Critiques, criticisms, and corrections are greatly appreciated. My goal is to increase my knowledge while providing value to others like me.

Intro

I sat down at my computer on my day off so I could do some learning for this week’s blog. It was supposed to be a quick win. I figured I’d knock out a six-hour LinkedIn Learning path, collect the shiny Microsoft certificate, and keep moving…

Until about halfway through the second module, I realized I was deep in the weeds. That six-hour runtime was a little deceptive. (For me at least.)

There was so much more to it than just watching the videos. I’m trying to learn this stuff, so I was pausing, researching, chasing down definitions. Generally doing my best to lock in the fundamentals.

What I found was exactly what I needed. The course didn’t assume I had any kind of background in AI or talk to me like I was already supposed to get it. It laid things out in plain terms without dumbing anything down. I definitely spent way too long grinding through it in one day, but I don’t regret it.

Especially not after learning that ChatGPT has a government name.

What I Explored and Where I Found It

Last week I was hyped about those Google certifications I found through Superhumans Life’s channel. They looked like the perfect next step until I realized they’re only free to start. Meaning free for just the first week.

Unless you knock them out in seven days, they drop you behind a paywall. I don’t have the time or the money for that right now. Not in this economy.

So, in between my usual YouTube and Spotify rabbit holes, I went back to LinkedIn Learning and found something I could finish.

“Career Essentials in Generative AI by Microsoft and LinkedIn.” Six hours of content, a real certificate, and over 700,000 people already enrolled?

No brainer. Sign me up.

I recognized a couple of the module instructors from Full Sail assignments, but most were new to me. The production value was high, the pacing felt tight, and best of all it was practical. No fluff, no doom spiraling. Just solid info broken down in a way that didn’t assume I was already some kind of AI engineer.

I followed every instructor on LinkedIn and bookmarked a bunch of side courses I might check out later.

What I Learned and How It Clicked

First off, generative AI isn’t human, no matter how convincing the output gets. That got hammered home this week.

These tools might feel conversational, but they’re just predicting the next most likely word. That’s it. Stochastic parrots, not synthetic minds. It’s easy to forget when the replies are smooth, but the course reminded me that it’s just pattern-matching math at scale.

The danger isn’t that they’re too smart. It’s that they’re confidently wrong and we don’t notice.

What did impress me was the scale of the data involved. These models don’t just learn from a few textbooks. They eat the entire internet and come back for seconds. And what we feed them matters. Biased input means biased output. Bad training data means bad answers.

That’s not just theory, it’s documented reality. I want to give some of my thoughts on AI and ethics next week, so I’ll say more about this then. But all of that is why I’m being more intentional with my prompts now.

It also reframed how I think about prompting in general. Every input is a prompt, whether it looks like one or not. A rant, a question, a vague idea, it doesn’t matter. It’s all data to the model. There’s no room for passive input. (Or sarcasm. I’m bad about that.)

I’ve stopped assuming the model knows what I mean. If I want quality, I have to define what quality is. And even better if I can give examples. I’m working on a shared GPT and a PDF that I’m hoping can train my brain on what makes a good prompt. I’ll post it on LinkedIn when I figure it out.

The fact is, if I get nonsense back from the model, that’s on me. That mental shift alone has changed how I work with AI across the board.
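To show what I mean by defining quality, here’s a quick before-and-after from my own tinkering. None of this comes from the course; the coffee blend and the wording are made-up practice material.

```python
# Two versions of the same ask. The second spells out what "good" means
# instead of hoping the model guesses. Both prompts are my own invented examples.

vague_prompt = "Write a social post about our new coffee blend."

specific_prompt = """Write a social post about our new coffee blend.
Audience: busy young professionals who skim on their phones.
Tone: warm, a little playful, zero corporate buzzwords.
Length: under 60 words, ending with one clear call to action."""

print(vague_prompt)
print(specific_prompt)
```

Same request, but the second version leaves a lot less room for the model to guess wrong.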

AI Terms/Definitions

I’ve been picking up a lot of jargon lately, and not all of it sticks the first time I hear it. So, every week I’m going to jot down a few terms that helped something click. It should help everything stick better for me, and maybe it’ll help you too.

These aren’t dictionary-perfect. They’re how I understand them right now, based on what I’ve seen and read over the course of the week.

Model
A model is the trained version of an algorithm. It’s what actually generates the outputs you interact with after all the internet eating is done.

Token
A token is a piece of a word. It’s how the AI slices up your prompt to make sense of it. Most models don’t think in full words. Instead, they think in chunks, and every chunk costs compute time and counts toward your token limits.
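If you want to see that slicing for yourself, here’s a tiny snippet I put together. It assumes you have OpenAI’s open-source tiktoken library installed (pip install tiktoken); this wasn’t part of the course, it’s just how I poked at the idea.

```python
# Show how a sentence gets chopped into token IDs and text chunks.
# Assumes the open-source tiktoken library is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a tokenizer used by recent OpenAI models
tokens = enc.encode("Marketing professionals love generative AI.")

print(tokens)                             # the integer IDs the model actually sees
print([enc.decode([t]) for t in tokens])  # the text chunk behind each ID
```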

Stochastic Parrots
This one stuck with me. It’s a way of describing large language models like ChatGPT that sound smart but are really just predicting the next most likely word based on mountains of training data. They don’t understand meaning. They just repeat patterns in convincing ways. Like parrots. But… nerdy.

If you want the deep dive, here’s the citation for the original source:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. https://doi.org/10.1145/3442188.3445922
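To make the parrot idea concrete, here’s a toy version I sketched out. Real models learn billions of parameters instead of keeping simple counts, so treat this as an analogy, not a description of how ChatGPT actually works.

```python
# A toy "next most likely word" predictor built from nothing but counts.
# Purely illustrative; real language models are vastly more sophisticated.
from collections import Counter, defaultdict

training_text = (
    "the campaign drove results the campaign drove engagement "
    "the campaign drove results"
).split()

# Count which word tends to follow which
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

# Given a word, parrot back the most likely next word
print(next_word_counts["drove"].most_common(1))  # [('results', 2)]
```

No understanding anywhere in there, just patterns. Scale that up by a few trillion words and you get something that sounds eerily fluent.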

Zero Shot, One Shot, Two Shot
This is about how much help you give the model before asking it to perform a task.

  • Zero shot: no examples, just vibes
  • One shot: one example
  • Two shot: two examples

The more examples you give it, the better the odds it does what you want in the first couple of attempts.
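Here’s roughly what that looks like in practice, using made-up prompts of my own (not examples from the course).

```python
# Zero-shot: no examples. Two-shot: two worked examples before the real ask.
# All of the product copy here is invented for illustration.

zero_shot = "Write a one-line product description for a reusable water bottle."

two_shot = """Here are two examples of the style I want:

Product: Canvas tote bag
Description: Carry everything, waste nothing.

Product: Bamboo travel mug
Description: Your morning coffee, minus the landfill guilt.

Now write a one-line product description for: Reusable water bottle
Description:"""

print(zero_shot)
print(two_shot)
```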

Inference
This is the moment the model actually does its thing. After it’s trained, inference is when you give it an input (your prompt) and it gives you an output (its response). It’s not learning anymore at that point. It’s making its calculations and applying what it already knows.
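For a very simplified picture of that split, here’s a tiny scikit-learn example I put together myself. It’s nowhere near a language model, but the same two phases show up: fit once, then infer as many times as you like.

```python
# Training happens once in fit(); inference is every predict() call after that.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [3], [10], [11], [12]]  # toy inputs
y = [0, 0, 0, 1, 1, 1]                 # toy labels

model = LogisticRegression()
model.fit(X, y)  # training: the model adjusts its internal weights

# Inference: no more learning, just applying what it already knows
print(model.predict([[2], [11]]))  # expected: [0 1]
```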

The “Special Differentiating Factor”

What made this course different? Ironically, it respected my time. Every module had something useful, and it didn’t waste energy trying to impress me with what AI could do someday. We all know what AI could do if left unchecked.

The learning path I took this week is true to its title. It’s an “Essentials” course and I felt like I got my money’s worth without spending a dime. (Thanks Full Sail.)

The lessons were clear, the examples made sense, and I left each section knowing more than I did before. That’s all I want out of any course. Especially right now when I’m balancing a hundred different things at once and time is at a premium.

If you’re trying to get a grip on AI without getting buried in theory or buzzwords, this one actually delivers. Just maybe… don’t do it all in one sitting. If nothing else, learn that much from me this week.

Creators I’m Now Following


This week I started curating my Twitter feed with a mix of industry experts and official AI sources. I’m trying to keep the signal-to-noise ratio as high as possible.

These are the voices I’ve been tuning into this week:

Plenty of rabbit holes ahead, but this gives me a steady feed of insight without having to hunt through clickbait.

Next Steps

Next week I’m shifting the focus to ethics. Not because I’m ready to tackle the big philosophical questions, but because understanding the basics of ethical AI use feels like a necessary step before I go further.

Bias, data misuse, misinformation… none of it’s theoretical anymore. It’s already baked into the tools we’re using. I want to understand where the guardrails are supposed to be, and what responsibility looks like when I’m the one writing the prompts.

I’ll also keep working on that shared GPT and prompt strategy cheat sheet I mentioned. This AI stuff is still overwhelming and a little messy, but it’s starting to click.

Conclusion

Like I said, this week was supposed to be a quick win with a shiny Microsoft certificate at the end. But then it turned into a full-day deep dive that left me smarter, exhausted, and a little more focused.

The Microsoft and LinkedIn Learning course gave me a much better sense of what AI is and isn’t. It pulled everything I’ve been piecing together so far into a cohesive, professional format.

I’m starting to grasp how these tools actually work under the hood. More importantly, I’m already learning how to use them better.

I came away with a stronger foundation, a growing glossary of terms to share and reinforce, and a mental shift around how I prompt and interact with AI. That’s huge.

It also gave me clarity on where to go next. Ethics isn’t a side topic when it comes to AI. It’s the next thing I have to understand, and it deserves its own deep dive.

So no, I didn’t hit Coursera or knock out those Google certificates from last week. And that’s fine. I feel good about what I did learn and the certificate I did earn.

Every step forward is momentum after all. Even when it feels like my feet are dragging.


What’s one AI tool, feature, or idea that made you stop and reevaluate how you do things? Have your thoughts on AI shifted recently, or are you still in the same spot you were last month? I’d love to compare notes.

