The AI future is coming…or is it?
The headlines around the AI Age can give you whiplash. It’s changing everything. It’s destroying everything. It’s leading CEOs to lay off thousands of workers and/or freeze hiring of thousands more. It’s actually failing in the field, where AI initiatives in real companies languish and stagnate. (And require humans to figure out and mitigate how those initiatives have failed.)
Can we just admit that, really, we don’t know? I know that’s not what pundits and leaders and “futurists” are supposed to do, but it might be more fun than just being wrong all the time!
It seems like a lot of the folks predicting the AI future are painting a picture of the future based on what they hope to be true. I’m going to put the least stock in the predictions coming from the CEOs and VCs who have a vested interest in people “obeying in advance” to their vision of the AI future. (Yes, I’m specifically talking about people like Sam Altman and Marc Andreessen.)
But, Elisa, you may ask: Why put the least stock in such people? Aren’t they titans of industry and closely watching every micro- and macro-movement in the roadmap and market for AI?
I mean, sure they are, but if we’re going to constantly reference how 90+% of start-ups (including venture-backed start-ups) FAIL, why do we also think that investors are so good at predicting things? Surely, they don’t set out to invest in failures.
And if the response is that they take big swings and big risks, and that they only need 10% of their bets to pay off, I would agree that’s also true, but argue that being people who can AFFORD to have only a 10% success rate on their financial bets doesn’t actually make them the people to make accurate predictions on socio-cultural developments.
And the future of AI is tied closely to the future of culture and society, because the path to adoption and eventual ubiquity is littered with examples of better technologies, candidates, products, etc. that didn’t cross that chasm. There are a lot of excellent TV shows that didn’t get renewed for a second season. (I mean, how did they cancel My So-Called Life, amirite?)
How’s it going in the real world?
A lot of AI proponents talk about how it will free us up from tedium to be our most creative and productive selves. Somehow I doubt that executing lower level analysis or productivity or customer support tasks is where the big money is. It feels like the big head of the Wizard in Wizard of Oz…look over here at how we’re going to make your life freer from trivial tasks you hate (or freer from the lower level employees you resent having to pay and care about). Just don’t look at the man behind the curtain leveraging data and questionable ethics.
Even if liberating us peons was the goal, my recent experiences haven’t felt liberating; they’ve felt frustrating.
Example #1: I tried Boardy, an AI designed to help you connect with relevant people for potential mutual benefit. Fascinating experience. It entailed a 30-minute phone call with Boardy’s AI…voiced to be an Australian man, and absolutely disclosed as an AI, not a human. It asked me questions and then engaged in active listening: “So, what I hear you saying is x?” I really liked the way it parroted my answers back to me. It made tweaks and tightened the language. It was so good that I asked for a transcript of our conversation, so I might review and leverage some of the language. Sure, Boardy told me, I’ll send you that.
I asked three more times before it admitted it didn’t capture transcripts and couldn’t send me one after all. What in the what?
Example #2: I’ve been building a new community space on a leading platform for that purpose. They have a really useful AI feature, an AI supportbot for your dedicated help. Early in the process it was super helpful. I asked simple questions, and it answered immediately, specifically, and in plain English. Certainly quicker than it would have been to wait for human support or search through their support web site.
But now I need slightly more complicated help. I wouldn’t mind if it simply said, I can’t help you, here’s the Guide on that, or here’s the way to get human help.
Instead, it gave me instructions on where to go to execute what I wanted, and three times it sent me somewhere and I did not see the command or link it was telling me, oh so confidently, to click.
Finally, it confessed that the product doesn’t actually do what I was looking to do.
This supportbot was purpose-built to support this platform and this platform alone. That’s its only job. It was obviously trained by humans, but apparently wasn’t trained to simply say, no, you can’t do that. In this frustrating exchange I must have eventually used a different word or two to describe what I was trying to do, and that finally triggered it to return the response, oh, that?!?! We don’t do that!!!
This is the thing we can’t forget about these tools…they are trained by humans to do things, but even if they have access to the full dictionary of human language, they cannot yet extract context well enough.
All I could think is, so this is what people say will take over the world???
I mean, don’t you have a story or two like this? [Let me know!]
We’ve seen how this goes (i.e. how are you feeling about social media these days?)
All I keep thinking about is the early days of social media. I’ve said before that I was a digital utopian. I saw so much promise in how social media could break down barriers, democratize access, raise up marginalized voices, and so much more. But the combination of a lack of governmental understanding, let alone action, and the driver of unfettered capitalism spelled doom for my (and a lot of other people’s) utopian visions. I don’t feel like I’m exaggerating when I say that social media has instead contributed to the decline of society, democracy, and our brains. It’s also instructive to point out that AI has been in use in the world of social media and e-commerce etc. for a long time. From dynamic feed algorithms to Amazon recommendations, you’ve been AI’ed for years.
With the introduction of generative and now agentic AI, AI has the capacity to do as much and more. Among the real world experiences of using AI that concern me the most is its tendency to people please and confidently deliver opinion as fact. (The end result of social media, now that I think of it, is that we have more trouble than ever distinguishing those two things.)
My colleague was telling me about a prompt she gave ChatGPT as a thought starter. She purposely gave it a high-level prompt, not wildly engineered to deliver a very specific actionable response. She asked it to compare and contrast two words. These are words that have dictionary definitions that are neutral. One is more of an adjective, one a noun, but they are words that evoke a similar quality. By asking ChatGPT to consider one vs. the other, she received a response that was an utterly subjective take, casting one word as a positive attribute and the other as negative. The discomforting thing was that the response was presented as simple fact. To use media parlance, it presented its hot take as *news,* when it was entirely *op-ed*.
Of course, my first problem is always going to be that the response was likely cribbed from some existing “thought leader” or author without attribution or citation by default. My second problem is that this only feeds the ongoing issue we have in our media landscape (both traditional and social media): that we are being lulled into thinking opinion is fact and positioning is truth (rather than pontification).
Jory, as a sophisticated tech user and true wordsmith, will leverage what is useful to her and leave the rest. I’m not sure the rest of us are equipped to do so!
Does this bother anyone else the way it bothers me?
Let’s face it: We do not know what the AI future will look like. And it is not written in stone!
In case you’re feeling disheartened I’ll end with a word about your power.
Currently I’m working with a client that is going to bring AI capabilities to bear to help regular people be more effective everyday activists. Helping people to accomplish that has been on mission for me since co-authoring Road Map for Revolutionaries. The AI will help craft messages but also use a feedback loop with the various distribution tools for said messages, to continuously improve the effectiveness of those messages…individualized for each user’s own audience and community.
Core to this start-up’s philosophy is that effective everyday activism should be based in truth not propaganda, so there will be a very significant stake put in the ground about curating the content resources that provide the data and facts used to bolster messaging propagated by the platform. This is a choice.
At the end of the day, people do have power. We face choices every day, and there is power in that choice. We have power in what tools we choose to use, how we use them, what we share, who we choose to work for, who we vote for, and so much more.
If you doubt people power, you need only see how quickly Jimmy Kimmel was put back on the air despite two of the largest syndicates telling Disney/ABC they still wouldn’t show his show. Why? Because a million people canceled their streaming service. They put him back on air, and those two syndicates eventually did too.
AI is here, on that I think we all agree. But we still have the time and the power and the obligation to shape how it is here. From the ways we use it to the ways we support or don’t support how others use it. And anyone who tells you they know what that’s going to look like is selling something. Probably literally.
Do I have my finger on the pulse or my head in the sand? Let me know :)
Join us for our next Optionality event
Optionality’s October 15 member webinar will dig deep on your money mindset and delve into understanding your own money story (dating back to childhood for many of us). And once you understand your money story, what do you do with that insight? Emily Scott focuses on the human side of money in her work, and this interactive session will help us acknowledge and tend to that human side.

We’re getting ready to launch a new community platform for Optionality, so subscribe now if you want to be kept in the loop. (Although I’m sure I’ll announce it here when we move.)
That’s it for today. Until next time, please leave a comment and let me know your thoughts on any or all of the above. This is basically my blog now! And as always, I appreciate a share of Optionality and this newsletter.
Thanks for reading This Week-ish with ElisaCP! This post is public so feel free to share it.
