Every time I use technology lately, I get another “try this new AI tool” message.
Every single time.
Maybe it is because I have added AI work to my ever-expanding and eclectic résumé, including on LinkedIn (a terrifying thought, considering what it implies about my data, but that is the world we live in). Or maybe everyone in the world is simply jumping on the AI bandwagon without an actual plan.
It’s like the world has discovered a fun new toy, and everyone wants to play with it.
I’m all for playing, of course. We learn, grow, and discover as we play, fail, try something new, and play some more.
And yet . . . even the simplest games of our childhood had some intrinsic rules, even if we made them up as we went along. The rules shifted and morphed as we discovered new aspects of play, but the best games were always the ones where all the players understood them. If one person decided to take over and change the rules on a whim, players would disappear or run home crying.
Somehow, as we played house or pirates, or tag, or whatever, we all came to understand a few key facts:
How the game began.
How the game ended.
How to stay safe within the game (unless the intention of the game was cruelty, like dodgeball).
Enter the game of AI, where everyone is racing to be the first to discover or create the most useful application of the technology. A game where we take risks, explore options, and challenge new players daily.
Sounds fun, until something breaks or someone gets hurt.
Okay, I’m being a tad dramatic here, but I am also frustrated.
For you see, when we leap willy-nilly into the fray, things get missed. Human things. Sure, AI technology can organize, strategize, and make a plan. All cool stuff! But in our hurry to race into the game, people are often forgetting some really important rules and neglecting to ask important questions.
Human issues, such as:
Who understands the systems well enough to problem-solve AND explain the problem clearly to others?
What is the organizational system that allows for addressing human concerns and issues promptly?
How do new rules get shared so that everyone understands?
Who ensures that nobody’s time is wasted, and everyone has an equal part to play in the game?
How do we ensure people get paid fairly and treated with respect?
And perhaps most importantly, how do we ensure the safety of all involved—which includes not letting the human role fall through the cracks?
This week, in the AI game I am playing I have been . . .
bounced from one team to another,
encouraged to fix work that I cannot access,
questioned as to why I did not edit a task when I never actually saw the suggested edits,
trained on two new (although related) tasks,
and, this morning, I attempted the newest task, which requires submitting on two different platforms (urgh!), only to discover a technical glitch that makes it impossible for me to move forward.
Now, I guess I am in timeout. I’m on the bench, and I can’t get paid until I join the game again. Nobody placed me there; I am simply waiting for something to be fixed or explained. I can’t play unless someone gives me more clarity or solves my problem.
That, however, requires human contact and communication. A help ticket only works if someone responds. A question only gets answered if someone responds (and knows the answer).
AI will get better and solve problems more quickly as it continues to grow. And yet the reality is that we need flexible, creative, considerate human interactions in a world where technology intervenes in almost everything we do.
At least that’s what I think. What are your thoughts?