
a16z Founding Partner: Why AI Will Save the World

Summary: AI will not destroy the world; in fact, it may save the world.
MARC ANDREESSEN
2023-06-07 14:12:51

Original Author: MARC ANDREESSEN

Compiled by: Deep Tide TechFlow

The era of AI has brought surprises and panic, but the good news is that AI will not destroy the world; instead, it may save the world.

MARC ANDREESSEN, founding partner of a16z, believes that AI provides an opportunity to enhance human intelligence, allowing us to achieve better outcomes across various fields. Everyone can have an AI mentor, assistant, or partner to help us maximize our potential. AI can also drive economic growth, scientific breakthroughs, and artistic creation, improve decision-making, and reduce wartime casualties. However, there are risks associated with the development of AI, and the current moral panic may exaggerate the issues, with some actors possibly acting in their own self-interest.

How should we rationally view AI, and from what perspectives should we consider it? This article provides us with a viable, credible, and in-depth discussion model.

Here is the full text:

The era of AI has arrived, bringing both surprises and much panic. Fortunately, I bring good news: AI will not destroy the world; in fact, it may save the world.

First, let’s briefly introduce what AI is: applying mathematics and software code to teach computers how to understand, synthesize, and generate knowledge, just as humans do. AI operates like any other computer program—running, accepting input, processing, and generating output. The output of AI is useful in many fields, from coding to medicine, law, creative arts, and more. It is owned and controlled by humans, just like any other technology.

AI is not a killer robot that will initiate and decide to murder humans or otherwise destroy everything, as you might see in movies. Rather, AI may become a better way to improve everything we care about.

Why can AI make everything we care about better?

Social sciences have conducted thousands of studies over the years, and the most reliable core conclusion is that human intelligence can improve life outcomes. Smart people achieve better results in almost all areas of activity: academic achievement, job performance, career status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision-making, understanding others' perspectives, creative arts, parenting outcomes, and life satisfaction.

Moreover, human intelligence has been the leverage we have used for millennia to create the world we live in today: science, technology, mathematics, physics, chemistry, medicine, energy, architecture, transportation, communication, arts, music, culture, philosophy, ethics, and morality. Without applying intelligence in all these areas, we would still be living in the muck, barely scraping by with basic agricultural subsistence. Instead, we have leveraged our intelligence to increase our standard of living by about 10,000 times over the past 4,000 years.
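
As a quick sanity check on those figures (the 10,000x and 4,000-year numbers are the author's), the implied compound growth rate is surprisingly modest; a minimal sketch:

```python
# Back-of-the-envelope: what annual growth rate turns 1x into 10,000x
# over 4,000 years? Solve (1 + r) ** 4000 == 10000 for r.
growth_factor = 10_000
years = 4_000

r = growth_factor ** (1 / years) - 1
print(f"implied annual growth rate: {r:.4%}")  # about 0.23% per year
```

Sustained compounding at a fraction of a percent per year, not dramatic yearly jumps, is what the intelligence-as-leverage argument rests on.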

AI offers us an opportunity to enhance human intelligence, making all intellectual achievements—from creating new medicines to solving climate change to interstellar travel—better than ever before.

AI's augmentation of human intelligence has already begun. AI is already around us in the form of many kinds of computer control systems, is now being rapidly upgraded through large language models like ChatGPT, and will accelerate from here, if we allow it.

In our new era of AI:

  • Every child will have an AI mentor who is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI mentor will accompany each child throughout their growth, helping them maximize their potential with infinite love.
  • Everyone will have an AI assistant/coach/mentor/trainer/advisor/therapist who is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will accompany everyone through all of life's opportunities and challenges, maximizing each person's outcomes.
  • Every scientist will have an AI assistant/partner that greatly expands the scope of their scientific research and achievements. Every artist, engineer, businessman, doctor, and caregiver will also have the same AI assistant in their fields.
  • Every leader, including CEOs, government officials, nonprofit chairs, sports coaches, and teachers, will also benefit. The amplification effect of leaders making better decisions for the people they lead is enormous, making this intellectual enhancement critically important.
  • The productivity growth of the entire economy will significantly accelerate, driving economic growth, the creation of new industries, the creation of new jobs, and wage growth, leading to a new era of global material prosperity.
  • Scientific breakthroughs, new technologies, and the emergence of new drugs will expand significantly as AI helps us further decode the laws of nature.
  • Artistic creation will enter a golden age, with AI-enhanced artists, musicians, writers, and filmmakers able to realize their visions faster and on a larger scale than ever before.
  • I even believe that when war is unavoidable, AI will improve warfare by significantly reducing wartime casualties. Every war is characterized by terrible decisions made by constrained human leaders under extreme pressure and with extremely limited information. Now, military commanders and political leaders will have AI advisors to help them make better strategic and tactical decisions, minimizing risks, errors, and unnecessary bloodshed.

In short, anything that people do today with their natural intelligence can be done better with AI, from curing all diseases to achieving interstellar travel.

This is not just about intelligence! Perhaps the most underrated quality of AI is how humanizing it is. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to a compassionate AI friend genuinely improves people's ability to cope with adversity, and AI medical chatbots have already proven more compassionate than their human counterparts. With its infinite patience and compassion, AI will make the world warmer and kinder, not harsher and more mechanized.

However, the risks here are high. AI may be the most important and best thing our civilization has created, comparable to electricity and microchips, and possibly even surpassing them.

The development and proliferation of AI—far from being a risk we should fear—is a moral obligation we owe to ourselves, our children, and our future. With AI, we should live in a better world.

So why the panic?

In stark contrast to this positive view, public discourse about AI is filled with fear and paranoia.

We hear various voices claiming that AI will kill us all, destroy our society, take away all jobs, and cause severe inequality. How do we explain such a vast difference between the nearly utopian and dystopian outcomes?

Historically, every significant new technology, from electric lighting to automobiles, from radio to the internet, has triggered panic: a social contagion that convinces people the new technology will destroy the world, or society, or both. The excellent work of the Pessimists Archive documents decades of these technology-driven moral panics; its history makes the pattern very clear. It turns out the current panic is not even the first for AI.

Of course, many new technologies have indeed led to negative consequences—often those technologies that have been very beneficial to us in other ways. Therefore, the mere existence of a moral panic does not mean there are no issues worth paying attention to.

However, moral panic is inherently irrational—it exaggerates potentially reasonable concerns to a hysterical degree, causing genuinely serious issues to be overlooked.

Now we have an AI moral panic.

This moral panic has been used by various actors as a driving force for policy action—new AI restrictions, regulations, and laws. These actors have made extremely dramatic public statements about the dangers of AI—satisfying and further exacerbating the moral panic—all while portraying themselves as selfless defenders of the public good.

But are they?

Are they right or wrong?

Economists have observed a long-standing pattern in such reform movements. The actors within such movements can be divided into two categories—"Baptists" and "Bootleggers"—drawing from the example of Prohibition in the 1920s America:

"Baptists" are true-believer social reformers who emotionally, if not rationally, feel that new restrictions, regulations, and laws are needed to prevent social disaster.

For Prohibition, these actors were often devout Christians who believed that alcohol was destroying the moral fabric of society. For AI risk, these actors genuinely believe that AI poses some existential danger; strap them to a lie detector and they really do mean it.

"Bootleggers" are self-interested opportunists who can gain financially by implementing new restrictions, regulations, and laws that shield them from competition. For Prohibition, these individuals made fortunes selling illegal alcohol.

In the case of AI risks, these are CEOs who will make more money if regulatory barriers are established, as the government protects them from new startups and open-source competition.

Some may argue that there are individuals who appear to be "Baptists" while also being "Bootleggers"—especially those funded by universities, think tanks, activist organizations, and media outlets attacking AI. If you receive a salary or grant to cultivate AI panic… you are likely a "Bootlegger."

The problem with "Bootleggers" is that they win. "Baptists" are naive ideologues, while "Bootleggers" are cynical operators, so the outcome of such reform movements is often that "Bootleggers" get what they want—regulation, protection from competition—while "Baptists" are left wondering where their social improvement drive went wrong.

We have just witnessed a stunning example—the banking reforms after the 2008 global financial crisis. "Baptists" told us we needed new laws and regulations to break up "too big to fail" banks and prevent such crises from happening again. Thus, Congress passed the Dodd-Frank Act in 2010, which was touted as meeting the goals of the "Baptists," but was actually controlled by the "Bootleggers"—the large banks. The result is that the "too big to fail" banks are now even bigger than they were in 2008.

Thus, in practice, even if "Baptists" are sincere—even if "Baptists" are correct—they will be exploited by the cunning and greedy "Bootleggers" to profit.

Currently, we are seeing this same situation unfold in the push for AI regulation; merely identifying the actors and questioning their motives is not enough. We should evaluate the perspectives of "Baptists" and "Bootleggers."

AI Risk #1: Will AI Kill Us All?

The initial and most primal AI risk is that AI will decide to kill humans.

The fear that the technology we create will rise up and destroy us is deeply encoded in our culture. The Greeks expressed this fear in the myth of Prometheus, who brought the destructive power of fire, and more broadly technology ("techne"), to humanity, and was condemned by the gods to eternal torment. Later, Mary Shelley gave us our modern version of the myth in her novel "Frankenstein, or The Modern Prometheus," in which we develop the technology of eternal life, which then rises up and seeks to destroy us.

The evolutionary purpose of this myth is to motivate us to consider the potential risks of new technologies seriously; fire, after all, really can be used to burn down entire cities. But just as fire was also the foundation of modern civilization, keeping us warm and safe in a cold and hostile world, the myth overlooks the far greater upside of most (all?) new technologies, and in practice it inflames destructive emotion rather than reasoned analysis. Just because premodern humans panicked like this does not mean we must; we can apply rationality instead.

I believe the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to fight for survival, as animals and we ourselves were. It is math, code, computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide it has motivations that lead it to try to kill us is a superstitious hand-wave.

In short, AI has no will, no goals, and does not want to kill you because it is not alive.

Now, some people are convinced that AI will kill humanity. These "Baptists" have gained significant media attention for their dire warnings; some claim to have studied the topic for decades and say they are now terrified by what they have learned, and some of them are even genuine innovators of the technology. These actors advocate a variety of strange and extreme restrictions on AI, from banning AI development all the way to military airstrikes on data centers and even nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, we must adopt a precautionary stance to prevent potential existential risk.

My response is that their position is unscientific. What is the testable hypothesis? What would falsify it? How would we know when we are entering dangerous territory? These questions go mostly unanswered, apart from "You can't prove it won't happen!" In fact, the position of these "Baptists" is so unscientific and so extreme, a conspiracy theory about math and code, and has already escalated to calls for physical violence, that I will do something I do not usually do and question their motives as well.

Specifically, I believe three things are happening:

First, recall John von Neumann's response to Robert Oppenheimer's hand-wringing over his role in creating nuclear weapons, which helped end World War II and prevent World War III: "Some people confess guilt to claim credit for the sin." What is the most dramatic way to claim credit for the importance of one's work without sounding overtly boastful? This explains the mismatch between the words and the actions of the "Baptists" who are actually building and funding AI: watch their actions, not their words.

Second, some "Baptists" are actually "Bootleggers." There is an entire professional field of "AI safety experts," "AI ethicists," and "AI risk researchers." They are hired to be doomsayers.

Third, California is famous for its thousands of cults, from EST to the People's Temple, from Heaven's Gate to the Manson Family. Many of these cults, though not all, are harmless and may even serve a purpose for alienated people who find a home in them. But some are very dangerous indeed, and cults have a notoriously hard time staying on the side of the line that stops short of violence and death.

The reality is that for everyone in the San Francisco Bay Area, it is evident that "AI risk" has evolved into a cult that has suddenly appeared in global media attention and public discourse. This cult attracts not only fringe figures but also some genuine industry experts and many wealthy individuals, including the recently infamous Sam Bankman-Fried.

It is precisely because of the existence of this cult that we have some very extreme AI doomsayers—this does not mean they actually possess secret knowledge that justifies their extremism; rather, they have worked themselves into a fervor and indeed are… very extreme.

This type of cult is not new—there has long been a millenarian tradition in the West that has produced apocalyptic cults. The AI risk cult exhibits all the characteristics of millenarian apocalyptic cults. To quote Wikipedia (with some additions from me):

"Millenarianism refers to a group or movement (AI risk doomsayers) that believes a fundamental transformation of society will occur (the arrival of AI), after which everything will change (AI utopia, dystopia, or apocalypse). Only dramatic events (banning AI, airstrikes on data centers, nuclear strikes on unregulated AI) can change the world (prevent AI), and this transformation is expected to be brought about or survived by a group of devout and focused individuals. In most millenarian cases, the impending disaster or battle (AI revelation or its prevention) will be followed by a new, purified world (AI ban), in which the believers will be rewarded (or at least proven right all along)."

This cult pattern is so obvious that I am surprised more people do not see it.

Make no mistake, cults sound interesting, their writings are often creative and captivating, and their members can be engaging at dinners and on television. However, their extreme beliefs should not dictate the future of law and society—this is clearly undesirable.

AI Risk #2: Will AI Destroy Our Society?

The second widely discussed AI risk is that AI will destroy our society because its outputs will be so "harmful," in the terms of these cultists, that even if we are not directly killed, we will suffer profound harm as humans.

In short: if robots do not kill us, misinformation will destroy our society.

This is a relatively recent doomsayer concern that branched off from, and to some extent took over, the "AI risk" movement I described above. In fact, the terminology of AI risk recently shifted from "AI safety" (the term used by people worried that AI will literally kill us) to "AI alignment" (the term used by people worried about societal "harms"). The original AI safety people are frustrated by this shift, although they do not know how to reverse it; they now advocate renaming the actual AI risk topic "AI notkilleveryoneism," which has not yet been widely adopted but is at least clear.

The claim of AI social risk carries its own term: "AI alignment." Aligned with what? Human values. Whose human values? Ah, this is where things get tricky.

Coincidentally, I have witnessed a similar situation—the "trust and safety" wars of social media. As is now evident, social media services have been under immense pressure from governments and activists for years to ban, restrict, censor, and otherwise suppress various content. Similarly, concerns about "hate speech" (and its mathematical counterpart "algorithmic bias") and "misinformation" have been directly transferred from the social media realm to the new field of "AI alignment."

The main lesson I learned from the social media wars is:

On the one hand, there is no absolute position of free speech. First, every country, including the United States, will deem certain content illegal. Second, there are certain types of content, such as child pornography and incitement to violence, that are universally considered intolerable—regardless of legality. Therefore, any technological platform that promotes or generates content (speech) will have some limitations.

On the other hand, once a framework for limiting content is established, for example restrictions on "hate speech," on specific hurtful words, or on "misinformation," a wide range of government agencies, activist pressure groups, and non-governmental entities will kick into gear and demand ever greater censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences, even to the point of acting in flagrantly criminal ways. In the realm of social media this cycle seems never to end, and it is enthusiastically supported by our elite power structures. It has been playing out on social media for a decade, with only certain exceptions, and it keeps getting more fervent.

This is the dynamic now forming around "AI alignment." Its proponents claim the wisdom to engineer AI-generated speech and thought that is good for society, and to ban AI-generated speech and thought that is bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous, and often, at least in the United States, flagrantly criminal.

Since the advocates of "trust and safety" and "AI alignment" are clustered in a very narrow slice of the global population, the American coastal elite, many of whom work in or write about the technology industry, I will not try to talk you out of this view. I merely want to point out the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you disagree with the narrow factional moral standards being imposed on both social media and AI through ever-tightening speech codes, you should also realize that the fight over what AI is allowed to say and generate will matter even more than the fight over social media censorship. AI is highly likely to become the control layer for everything in the world.

In short, do not let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

The fear of job loss from mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, ever since machinery such as the mechanical loom first appeared. Even though every major technological revolution in history has ultimately created more jobs at higher wages, each wave of panic has been accompanied by the claim that "this time is different": this time it will finally happen, this time the technology will deliver the death blow to human labor. Yet it has never happened.

In recent history, we have experienced two cycles of technology-driven unemployment panic—the outsourcing panic of the 2000s and the automation panic of the 2010s. Despite many commentators, experts, and even tech industry executives banging the table throughout both decades, claiming that mass unemployment was imminent, by the end of 2019—just before the COVID pandemic broke out—there were more jobs in the world than ever before in history, and wages were higher.

Yet, this erroneous notion does not disappear.

And now, it is back.

This time, we finally have a technology that will take away all jobs and render human labor irrelevant—AI. This time, history will not repeat itself; AI will lead to mass unemployment—rather than rapid growth in the economy, employment, and wages—right?

No, that will not happen—in fact, if AI is allowed to develop and proliferate throughout the economy, it could trigger the most significant and sustained economic prosperity in history, accompanied by record job and wage growth. Here’s why.

The speakers claiming that automation kills jobs keep making a core mistake called the "lump of labor fallacy." This fallacy is the incorrect belief that there is a fixed amount of work to be done in the economy at any given time, which is done either by machines or by people, and that if machines do it, there will be no work left for people.

The lump of labor fallacy flows naturally from naive intuition, but that intuition is wrong. When technology is applied to production, we get productivity growth: an increase in output for a given input. The result is lower prices for goods and services. As prices fall, we pay less for them, which means we now have extra spending power with which to buy other things. This increases demand across the economy and drives the creation of new production, including new products and new industries, which then creates new jobs for the people who were displaced by machines. The result is a larger economy with greater material prosperity, more industries, more products, and more jobs.
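
The cycle described above can be sketched with a toy household budget; every number here is invented purely for illustration:

```python
# Toy illustration of the rebuttal to the lump-of-labor fallacy.
# All numbers are invented for illustration only.
budget = 100.0                       # a household's total spending
old_price, units = 10.0, 5           # it buys 5 units of a good at $10
spend_before = old_price * units     # $50 on the good, $50 on everything else

new_price = 4.0                      # automation cuts the good's price
spend_after = new_price * units      # the same 5 units now cost $20
freed = spend_before - spend_after   # $30 of newly freed purchasing power

# That freed $30 becomes demand for new products and industries,
# i.e. new work for people, rather than vanishing from the economy.
print(f"freed purchasing power: ${freed:.2f}")
```

The point of the sketch is that spending displaced by cheaper production does not disappear; it is redirected, which is where the new jobs come from.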

But the good news does not stop there. We will also see higher wages. This is because, at the individual worker level, the market will set compensation based on the marginal productivity function of workers. Workers injected with technology into businesses will be more productive than those in traditional firms. Employers will either pay that worker more because they are now more productive, or another employer will do so for their own self-interest. The result is that, typically, introducing technology into an industry not only increases the number of jobs in that industry but also raises wages.

In summary, technology makes people more productive. This leads to lower prices for existing goods and services and higher wages. This, in turn, promotes economic growth and job growth while incentivizing the creation of new jobs and new industries. If the market economy operates normally, and if technology is freely introduced, this is an endless upward cycle. Because, as Milton Friedman observed, "human desires and needs are infinite." A market economy infused with technology is our way of getting closer to achieving everything everyone can imagine, but never fully realizing it. That is why technology will not destroy jobs.

These ideas may be so shocking to those who have not been exposed to them that you might need some time to understand them. But I assure you I am not making them up—in fact, you can read all about them in standard economics textbooks.

But this time is different, you might think. Because this time, with AI, we have a technology that can replace all human labor.

However, using the principles I described above, imagine what it would mean if all existing human labor were replaced by machines.

It would mean that the rate of economic productivity growth would take off at an absolutely astronomical speed, far beyond any historical precedent. The prices of existing goods and services would drop to near zero across the board. Consumer welfare would skyrocket. Consumer spending power would soar. New demand in the economy would grow explosively. Entrepreneurs would create dazzling new industries, products, and services, hiring as many people and AI as possible to meet all the new demand as quickly as possible.

And what if AI once again replaced that labor? The cycle would repeat, driving consumer welfare, economic growth, and even higher job and wage growth. It would be a continuous upward spiral toward a material utopia that neither Adam Smith nor Karl Marx ever dared dream of.

We should feel fortunate.

AI Risk #4: Will AI Lead to Severe Inequality?

Concerns about AI taking jobs transition directly to the next AI risk, assuming that AI does indeed take all jobs. Wouldn't that lead to massive and severe wealth inequality? The owners of AI would reap all the economic rewards while ordinary people would have nothing.

The flaw in this theory is that, as the owners of technology, it is not in your interest to keep it to yourself—instead, it is in your interest to sell it to as many customers as possible. The largest market for any product globally is the entire world, with 8 billion people. Therefore, in reality, every new technology—even if initially sold only to large companies or wealthy consumers—will quickly spread until it falls into the hands of the largest possible mass market, ultimately reaching everyone globally.

A classic example is Elon Musk's so-called "secret plan" publicly released in 2006:

  • Step one, make a [expensive] sports car.
  • Step two, use that money to make a more affordable car.
  • Step three, use that money to make even more affordable cars.

…and of course, that is exactly what he is doing now, which is why he has become the richest person in the world.

The key is in that last point. If Musk only sold cars to the rich today, would he become richer? No. If he only made cars for himself, would he be richer than he is now? Certainly not. He maximizes his profits by selling cars to the largest market in the world—the entire world.

In short, everyone can access such products—just as we have seen in the past with not only cars but also electricity, radios, computers, the internet, smartphones, and search engines. The incentives for these technology manufacturers are very strong; they will push prices down as much as possible to make them affordable for everyone globally. This is precisely what is already happening in the AI space—this is why you can now use cutting-edge generative AI, and even access Microsoft Bing and Google Bard for free—this is also what will continue to happen in the future. Not because these providers are generous, but precisely because they are greedy—they want to expand their market size to maximize profits.

Thus, the opposite occurs: rather than concentrating wealth, technology empowers the individual customers of the technology, ultimately including everyone on the planet, and it is those customers who end up capturing most of the value generated. As with earlier technologies, the companies building AI, assuming they must operate in a free market, will compete fiercely to make this happen.

This does not mean that inequality is not an issue in our society. It does exist; it just is not driven by technology. Rather, it is driven by sectors in the economy that resist new technologies the most and that have the most government intervention to block the adoption of new technologies (especially housing, education, and healthcare). The real risk of AI and inequality is not that AI will lead to more inequality, but that we do not allow the use of AI to reduce inequality.

AI Risk #5: Will AI Enable Bad Actors to Do Bad Things?

So far, I have explained four of the most commonly raised AI risks; now let’s discuss the fifth issue, which I genuinely agree with: AI will make it easier for bad actors to do bad things.

In a sense, technology is a tool. Starting with fire and stones, tools can be used for good—cooking food and building houses—or for bad—burning people and beating people. Any technology can be used for good or bad. That is true. And AI will undoubtedly make it easier for criminals, terrorists, and hostile governments to do bad things.

This leads some to propose banning AI in such cases. Unfortunately, AI is not some mysterious substance that is difficult to obtain. Rather, it is the easiest material to access in the world—mathematics and code.

Clearly, AI is already out in the wild. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and excellent open-source implementations proliferate daily. AI is like air: it will be everywhere. Stopping it would require a level of draconian totalitarian oppression (a world government monitoring and controlling all computers? jackbooted police in black helicopters seizing rogue GPUs?) so severe that we would have no society left to protect.

Therefore, we have two very simple ways to address the risk of bad actors using AI to do bad things, which is precisely what we should focus on.

First, we already have laws on the books that criminalize most of the bad things anyone might do with AI. Hack into the Pentagon? That is a crime. Steal money from a bank? That is a crime. Create a bioweapon? That is a crime. Commit a terrorist act? That is a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We do not even need new laws; I am not aware of a single actual bad use of AI that has been proposed that is not already illegal. And if a new bad use is identified, we ban that use.

But you will notice that I slipped something in above: I said we should focus first on preventing AI-assisted crimes before they happen. Wouldn't such prevention mean banning AI? Well, there is another way to prevent such behavior, and that is to use AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad people make it powerful in the hands of good people, especially the good people whose job it is to prevent bad things from happening.

For example, if you are concerned about AI generating fake people and fake videos, the answer is to build new systems in which people can verify themselves and real content through cryptographic signatures. The digital creation and alteration of both real and fake content existed long before AI; the answer is not to ban word processors, Photoshop, or AI, but to use technology to build systems that actually solve the problem.
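The idea of verifying content with cryptography can be sketched in a few lines. This is a minimal illustration using Python's standard-library `hmac` module with a hypothetical shared signing key; a real content-provenance system would use public-key signatures (e.g. Ed25519) so that anyone can verify without holding the creator's secret.

```python
import hmac
import hashlib

# Hypothetical secret held by the content creator's signing service.
SECRET_KEY = b"creator-signing-key"

def sign_content(content: bytes) -> str:
    """Produce an authentication tag binding the tag to this exact content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any altered byte fails."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video = b"original footage bytes"
tag = sign_content(video)

print(verify_content(video, tag))       # authentic content verifies: True
print(verify_content(b"deepfake", tag)) # tampered content fails: False
```

The point of the sketch is the workflow, not the primitive: content is signed at creation time, and anything that cannot present a valid tag is simply unverified, regardless of whether it was made by a human or an AI.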

Thus, the second point is to take significant steps to use AI for good, legal, and defensive purposes. Let’s apply AI to cybersecurity, biological defense, combating terrorism, and all the other things we do to protect ourselves, our communities, and our national security.

Of course, many smart people in the world are already working on this. But if we directed all the effort and brainpower currently obsessed with banning AI toward using AI to prevent bad actors from doing bad things, I have no doubt that a world infused with AI would be far safer than the world we live in today.

What actions should be taken?

I propose a simple plan:

  • Large AI companies should be allowed to build AI as quickly and aggressively as possible, but they should not be allowed to achieve regulatory capture, nor to be insulated from market competition by false claims about AI risk. This will maximize the technological and social returns from the astonishing capabilities of these companies, which are jewels of modern capitalism.
  • Startup AI companies should be allowed to build AI as quickly and aggressively as possible. They should neither receive the government-granted protections afforded to large companies nor any government assistance; they should simply be allowed to compete. Even if the startups do not succeed, their presence in the market will continuously motivate the large companies to do their best. Either way, our economy and society benefit.
  • Open-source AI should be allowed to spread freely and compete with both large AI companies and startups. Open source should face no regulatory barriers. Even if open source never defeats the companies, its broad availability is a boon to students everywhere who want to learn how to build and use AI and to be part of the technological future, and it ensures that AI is available to everyone who can benefit from it, no matter who they are or how much money they have.
  • To offset the risk of bad actors using AI to do bad things, governments should work actively with the private sector in every area of potential risk to use AI to maximize society's defensive capabilities. This should not be limited to AI-enabled risks but should also extend to more general problems such as malnutrition, disease, and climate. AI can be an extraordinarily powerful tool for solving problems, and we should treat it as one.

This is how we can use AI to save the world.

I conclude this article with two simple statements.

The development of AI began in the 1940s, coinciding with the invention of computers. The first scientific paper on neural networks—the architecture of AI we have today—was published in 1943. Over the past 80 years, an entire generation of AI scientists has been born, educated, worked, and, in many cases, passed away without seeing the returns we are now reaping. Each of them is legendary.

Today, more and more engineers—many of whom are young and may have grandparents or even great-grandparents involved in the ideas behind AI—are working hard to make AI a reality. They are all heroes, every single one of them. My company and I are thrilled to support them as much as possible, and we will 100% support them and their work.
