The Future Fund, a philanthropic collective funded primarily by the creator of a cryptocurrency exchange and aimed at supporting “ambitious projects to improve humanity’s long-term prospects,” has launched a contest offering substantial prizes for arguments that change their minds about the development and effects of artificial intelligence.
The use of prizes to incentivize philosophical work on specific topics is not new: they are regularly offered by philosophical organizations, academic journals, and foundations. Prizes for philosophical work aimed at changing minds and behavior have been offered before, too.
The Future Fund’s “AI Worldview” contest is a bit different, though. One difference is that its prizes are bigger: up to $1,500,000.
Another difference is the condition for winning several of the prizes: moving the judges’ credences regarding a few predictions about artificial general intelligence (AGI)—and the more you move them, the bigger the prize you could win.
For example, the judges currently put their confidence in the claim that AGI will be developed by January 1, 2043, at 20%. But if you can devise an argument that convinces them that their credence in this should be between 3% and 10%, or between 45% and 75%, then they will award you $500,000. If you convince them it should be below 3% or above 75%, they will award you $1,500,000. A similar prize structure is offered for other propositions, such as, “Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI,” a claim in which they currently place 15% confidence. There are other prizes, too, which you can read about on the prize page.
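For concreteness, the tiered structure just described can be sketched as a piecewise mapping from a judge’s revised credence to a payout. This is an illustrative sketch only: it models just the bands quoted above for the AGI-by-2043 question, and the full rules on the prize page govern boundary cases and the other propositions.

```python
def prize_for_credence(new_credence: float) -> int:
    """Illustrative sketch of the quoted prize tiers for the
    AGI-by-2043 question (judges' current credence: 20%).
    Only the bands mentioned in the post are modeled; the
    official rules define the complete structure."""
    if new_credence < 0.03 or new_credence > 0.75:
        return 1_500_000  # a large enough shift earns the top prize
    if 0.03 <= new_credence <= 0.10 or 0.45 <= new_credence <= 0.75:
        return 500_000    # a moderate shift earns the smaller prize
    return 0              # credences near 20% earn no prize

print(prize_for_credence(0.02))  # 1500000
print(prize_for_credence(0.50))  # 500000
print(prize_for_credence(0.20))  # 0
```

The structure makes the incentive explicit: the further an argument moves the judges from their current 20%, the larger the award.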
Why are they running this contest? They write:
We hope to expose our assumptions about the future of AI to intense external scrutiny and improve them. We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century, and it is consequently one of our top funding priorities. Yet our philanthropic interest in AI is fundamentally dependent on a number of very difficult judgment calls, which we think have been inadequately scrutinized by others.
As a result, we think it’s really possible that:
all of this AI stuff is a misguided sideshow,
we should be even more focused on AI, or
a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.
If any of those three options is right—and we strongly suspect at least one of them is—we want to learn about it as quickly as possible because it would change how we allocate hundreds of millions of dollars (or more) and help us better serve our mission of improving humanity’s long-term prospects…
AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year. As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease. But two formidable new problems for humanity could also arise:
Loss of control to AI systems
Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.
Concentration of power
Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future…
We really want to get closer to the truth on these issues quickly. Better answers to these questions could prevent us from wasting hundreds of millions of dollars (or more) and years of effort on our part. We could start with smaller prizes, but we’re interested in running bold and decisive tests of prizes as a philanthropic mechanism. A further consideration is that sometimes people argue that all of this futurist speculation about AI is really dumb, and that its errors could be readily explained by experts who can’t be bothered to seriously engage with these questions. These prizes will hopefully test whether this theory is true.
I was told via email, “We think philosophers would be particularly well-equipped to provide in-depth analyses and critiques about these assumptions concerning the future of AI, so we wanted to disseminate this opportunity to the broader philosophy community.”
The model of this contest could be applied to other topics, of course. Which would you suggest?
Originally appeared on Daily Nous