AI Reflecting Its Creator
A recent foray into Asana got me thinking about AI through the lens of a common trope: the genius who cannot tie their own shoes.
I spent some time recently upgrading systems in my favorite project management software. Asana is now pushing use of its AI Studio hard, and I was game to see what it could do. After 4 hours, I’d had about a 50% success rate in finding automation rules that both actually functioned and added value to our work…and I’d used up my organization’s AI Studio credits for the month.
It was very much a moment of “the soup is bland, and there isn’t enough of it.”
Two days later, I caught up with Dario Amodei’s latest essay on the future of AI, in which he posits the rise of a “country of geniuses in a data center.” The essay is very long, and paints a vision of the future that careens between inspiring and terrifying. It also felt very out of place after I’d spent an afternoon trying, and failing, to get Claude to follow rules based on simple “if, then” conditions.
The contrast of Amodei’s vision with my personal experience made me think back to my grad school days, and a professor who would urge us to ask, “What if both these things are true?” And in this moment, at least, that appears to be the case.
Just this week, a team of philosophers and computer scientists argued in an article in Nature that we should stop faffing about and admit that artificial general intelligence (AGI) has been achieved. They cite these models’ measurable accomplishments across academic fields and their undeniable effects (positive and negative) in people’s personal lives—and they went to press before they could even write about MoltBot!
Yet simultaneously, one of those leading models cannot follow instructions a kindergartner could easily carry out. And in late January, another study came out, from an AI consulting firm (Section), no less, noting that about two-thirds of knowledge workers say they are saving at most 4 hours a week by using AI. Less than 6% of workers think they are saving more than 12 hours a week.
And so, at the risk of anthropomorphizing large language models even further than they have been in the last week, I began thinking about that old trope: The absent-minded professor. The genius who cannot tie their own shoes.
Maybe for the medium term, or maybe forever (as Amodei admits in his essay, even the head of Anthropic cannot fully predict the capabilities AI will reach), this is the AI we will know: programs that show remarkable efficiency and capability alongside astounding weaknesses.
A writer for TechArena noted this week, in the wake of MoltBot, that “our urge to be a creator as part of human Imago Dei is a thread throughout human history.”
And what is more fitting, for humans, than to create something as imperfect as we are?