The hardest part about predicting what the future holds for us when it comes to AI is the sheer brainpower of the individuals who hold polar-opposite views.
One day, I'll listen to a podcast with a super-intelligent, charismatic expert in the field of AI who tells me the future is filled with rainbows, disease elimination, job creation, and prosperity for all.
I'll breathe a sigh of relief and believe them.
Then, the next day, I'll listen to an equally intelligent, charismatic expert in the field of AI who tells me the future consists of us being harvested for our molecules when the AIs we've worked so hard to create rip up every part of our Earth, turning the entire planet into one giant GPU.
I'll let out a fearful gasp and believe them.
Then, I'll go away and think about it some more and realise that, as is usually the case, some nuance is required, and the truth is most likely to be somewhere in the middle.
In my opinion, AI, as has been the case for virtually every technological advancement since the invention of agriculture, will create some amazing future realities for some subset of people while simultaneously making life significantly worse for another subset of people.
The industrial revolution was pretty cool if you were a factory owner, but not so great if you were unlucky enough to find yourself sent to the Workhouse.
The invention of the plane allowed individuals with enough resources to travel to virtually any place worldwide within a day.
However, this development wasn't so great for those who found themselves seeking refuge in bomb shelters as the same planes that were once symbols of marvel became instruments of warfare and devastation.
I could give similar examples of how cars have been great for transporting goods and people, but are also a top ten killer of people. Or give less death-y examples of how social media is great for keeping in contact with people across the globe, but overuse can be detrimental to mental health and increase isolation.
HOWEVER,
In every great leap forward humans take, there are winners and there are losers. But in the aggregate, quality of life has trended upwards for most people.
Life for the masses, despite what you might think from spending time losing your mind on social media, has significantly improved for MOST people by almost every quantifiable measure.
The issue then is always one of variance and luck.
For most people, the future is great and gets better. But for those dealt a bad hand, technological developments can increase the efficiency of human suffering and could make their specific life infinitely worse.
However, I do think that blanket mocking or ignoring of those who call for caution with AI and speak openly about its cataclysmic risks is somewhat foolish and unhelpful.
If you were to map out future scenarios probabilistically, there is a possible future path that could result in human beings losing our status as leaders of our Earth.
While I don't believe it's a particularly high probability, global leaders and those working on AI should certainly consider it, and the field of debate should be open.
Great technological change presents great benefits but also great risks. We would be collectively moronic to just close our eyes and stick our fingers in our ears here because "yolo nvidia up only," look at this cool AI dog picture, haha GPT3.5 just said a funny joke.
But that's usually what we do as a society, right?
Or maybe this time it's different?
Those of you who have spent enough time in crypto will know what comes next.
So basically, you're telling me you have zero idea what's going to happen, and you're making me read a giant article about it?
Yes.
But while I can't tell you if AI will fast forward us to a utopia or to an early demise, I can tell you how I plan to deal with it and make some short-term predictions for what's on the horizon for us.
Some predictions:
Oil protestors who have spent the past few years gluing themselves to the floor will be replaced by AI protestors who will spend the next few years gluing themselves to the floor.
AI will create more jobs than it destroys.
AI will create great breakthroughs in science and healthcare.
There will be a bubble in AI stocks, and it's already started. We've already seen four-week-old companies raising $100m+ without a working product, and Nvidia is trading at a P/E ratio of 193. However, we've not yet seen Tai Lopez launch his own LLM, so until then, the top's not in. (NFA DYOR I am dumb at stocks)
Despite popular belief, crypto and AI will overlap quite significantly. As we see an explosion of deepfakes and bots pretending to be humans online, AI hysteria will continue to grow.
It's helpful then that the most prominent solution to this, launched by the most prominent person in AI, is a crypto tool: Worldcoin.
As the Overton window shifts, helped by Apple and their eye-scanning goggles, people will be less afraid of eyeball scanners and more worried about the online robot baddies that need to be verified to prevent evil.
This is quite promising for crypto, and I'll touch more on this in the next post.
Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.
This feels largely true with the current AI models. I've seen calls from the prominent exaggerators and caps lock typers that AI is going to replace everyone's jobs in the next six months, but from my experience using these tools, it is NOWHERE near ready to do this.
I often ask GPT to proofread my blog posts and correct any grammar mistakes because I'm dumb and lazy. About 60% of the time, it does a great job, better than my old English teacher who used to edit homework with the big red pen that made children cry. But 40% of the time, it randomly adds words, removes anything with humour, and ignores the main task. It still takes me considerable time to reread everything it gives back to me and check its work.
This is fine for people like me who have about three readers and do this for fun. But to use it as a professional business tool in its current state is dangerous. It's not bulletproof or ready to replace a human copywriter right now.
I can't say Google's Bard has been much better either. I asked it a few days ago to put the current crypto prices into a table, showing me the current drawdown from all-time high to now. It told me Solana was $220 and Bitcoin was $50,000. Nice if it were true, but wildly inaccurate.
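For what it's worth, the drawdown calculation I asked for is trivial arithmetic, which is what makes the miss frustrating. A minimal Python sketch (using rough historical all-time highs and made-up current prices purely as an illustration, not live market data):

```python
def drawdown_pct(ath: float, current: float) -> float:
    """Percentage fall from an all-time high to the current price."""
    return (ath - current) / ath * 100

# Illustrative figures only: BTC ATH roughly $69,000, SOL ATH roughly $260.
# The "current" prices below are placeholders, not real quotes.
print(f"BTC: {drawdown_pct(69_000, 30_000):.1f}% below ATH")
print(f"SOL: {drawdown_pct(260, 20):.1f}% below ATH")
```

If a model can't reliably do this once it has the two numbers, tabulating live prices correctly is clearly still beyond it.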
I hear people like Jason on the All-In Podcast saying that this tech makes those who code 10x better, and while I haven't got much experience coding, my experience using these AIs for basic tasks makes me skeptical.
I have no doubt that in the future, these tools will be infinitely better as the parabolic curve of improvement takes hold, but right now, expectations are significantly higher than the reality.
How I'm dealing with it:
So, we've learned from this post that I basically have no clue what's going to happen with AI other than some stuff good, some stuff bad, probs not immediately, but probs not that far away either.
I'm not in any way qualified to know if these AI machines will murder all the babies, but to be honest, neither is anyone else because we've never met one before.
We don't even know what sentience or consciousness is or even how the brain works. So trying to predict how some alien species we've somehow accidentally created is going to treat us is kinda not a productive use of my time.
So that's exactly how I plan to deal with it.
By avoiding scary-sounding headlines and spending more time playing with the technology: hoping all the super cool use cases come true, while keeping a rational, evidence-based thought process that allows me to profit from the huge wave of attention this new innovation captures.
Because even if the robots do rise up and turn us all into batteries, nothing I can do will prevent that future, and to be honest, I feel incredibly fortunate to be born as a human in this time of mass abundance, rather than at any point in the last 10,000 years.
Even if it turns out to be our book's final chapter.