Given what’s been happening over the last two weeks, everybody in my tech circle seemed to be discussing the whole OpenAI drama and Brockman’s and Altman’s next possible moves. Some were claiming OAI’s board may have discovered dangerous developments and safety concerns around AGI; some were applauding an (apparently) perfectly orchestrated coup, whose ultimate winner was supposed to be MSFT.
One of my acquaintances commented on this by quoting Roosevelt:
“Great minds discuss ideas; average minds discuss events; small minds discuss people.” – Eleanor Roosevelt
…or arrogance, depending on how you see it.
I’ve been wanting to better understand the whole concept of future AI and transhumanism (I still feel like I know 1% of it), so I’ve been reading some really brilliant pieces: Azeem Azhar on the future of AI, The New York Times on the OAI board’s lost battle against capitalism, another NYT piece on the social responsibility of business, The Algorithmic Bridge, and the Vanity Fair article on techno-oligarchs.
I read those in my rare moments of deep work, followed by intermittent periods of slacking: checking my inbox, only to numbingly open it again a couple of minutes later, messaging, and reading adding-nothing-to-my-life news. I admit I’m enslaved by the Instant Gratification Monkey. And if my weaknesses are what differentiates my humanness from a machine, then so be it. Soon we may be the minority 😉.
I guess my mental capacity is nearly maxed out, so I’ve been debating whether to allow myself a little more slack and, for this week’s #hankka episode, to write on more universal, easier-to-comprehend truths, a.k.a. my usual psychological, leadership-themed effusions… but they say you need to strike while the iron is hot, so here I am, adding my five cents to the whole “AI is coming, we’re doomed” debate that no one asked for.
Is it “just” yesterday’s news?
Whether or not OpenAI indeed achieved a breakthrough with Q*, I don’t think it’s AGI. It will be a long time before we can get our hands on anything like that (don’t believe the media’s sensational, blown-up headlines).
As Yann LeCun, VP & Chief AI Scientist at Meta, states, one of the main challenges in improving LLM reliability is to replace auto-regressive token prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc.) is working on that, and some have already published ideas and results. It is likely that Q* is OpenAI’s attempt at planning. And I think my acquaintance Virginia Dignum, who recently commented on the whole OAI drama, has made a point about something much graver: that the spectacle distracts attention from what we need to sort out first:
“I don’t have much to say about the current developments around Sam Altman and OpenAI except that what is basically an internal company issue has been disproportionately blown up to the scale of a problem for humanity (as claimed by e.g. Gary Marcus in his X posts).
The media has provided a platform to these people who want so badly to believe they are just a step away from creating thinking, conscious or whatever entities that we have all lost perspective on what ChatGPT and similar tools actually are: text-synthesizing tools, predicting the next word in a sequence from a huge corpus of data (often obtained by doubtful means). Most worrying is that these same people are influencing policy decisions at a very high level.”
Time will tell.
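To make the “predicting the next word in a sequence” point concrete, here is a deliberately toy sketch of the auto-regressive idea. It is not how GPT models work internally (they use neural networks over huge corpora, not bigram counts), but the generation loop has the same shape: look at the sequence so far, predict the most likely next token, append it, repeat.

```python
from collections import Counter, defaultdict

# A tiny "corpus" and a bigram table: for each word, count which words follow it.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, steps):
    """Auto-regressive generation: greedily append the most frequent
    continuation of the last token, one token at a time."""
    out = [start]
    for _ in range(steps):
        candidates = bigrams[out[-1]].most_common(1)
        if not candidates:
            break  # no known continuation
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the", 3))
```

There is no plan, no goal, no model of the world in that loop; each step only asks “what usually comes next?”. That gap is exactly what LeCun’s point about replacing auto-regressive prediction with planning is aimed at.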
You can’t think clearly in panic mode
There seem to be two camps out there in the AI field: companies that contribute to open source (a minority), and those that started with such a narrative but then turned to secretive development, feeding public opinion with fear around AI, fear that serves the interests of people with power, not the everyday Joe.
This fuels misinformation, sends researchers and their resources in the wrong directions, and puts pressure on legislators to over-regulate, further removing the already narrowing opportunities for small tech players to innovate and represent the interests of the wider population. Overregulation hinders progress for SMEs, which can afford neither costly legal counsel nor the potential fines from unintentional data misuse.
During the AI Safety Summit held earlier this month, the UK’s Prime Minister Rishi Sunak warned that large companies could not be left to “mark their own homework”, against a backdrop of concerns about the technology’s potential capabilities. The UK government has promised to bring in regulation forcing AI firms to report before they train models over a certain capability threshold, with independent oversight on testing, acting as a “referee” when needed. It would be great if we could get governments to help enforce more transparency and openness, since it won’t happen voluntarily.
The discussion needs to happen among more of us than ever before, including AI experts and developers; tech journalists and policymakers; legislators and civil society; consumers and creators, to make sure we design systems with everyone’s interests in mind and prevent potential bias.
To serve and to protect
The idea of creating an AI superintelligence is a big part of longtermism (a topic for a separate post, maybe next week), which some call the Scientology of Silicon Valley. Some people in the tech world do seem to believe that artificial general intelligence could ultimately dominate and destroy us all. As Survival of the Richest author Doug Rushkoff told The Guardian in May:
“They’re afraid that their little AIs are going to come for them. They’re apocalyptic, and so existential, because they have no connection to real life and how things work. They’re afraid the AIs are going to be as mean to them as they’ve been to us.” – Doug Rushkoff
Ultimately, it doesn’t really matter what those tech leaders believe. When too much power is held by too few agents, and when ideas are weaponised, motives and actions matter more than what people think of the weapons they’re wielding. And when it comes to the damage that powerful people want to cause, lofty justifications usually follow existing projects or desires.
So we may well be witnessing yet another case of techie religion built around certain pursuits, where the general public is fed the narrative of an evil, inevitable future god in the form of AGI, which tech leaders supposedly want to protect us from.
PS. And since they may not have our best interests in mind, we should use our voice to raise concerns about real problems. We don’t want to end up living in an alternate, autocratic nightmare reality created by the mighty few.