Get $100K to Write Rules for AI Systems
If you’ve been enamored with ChatGPT, here’s a chance to experiment some more, as OpenAI is offering grants to fund such experiments
Want to contribute to the growing popularity and demand for generative AI? Here’s your chance to do so and get paid up to $100,000 in grants to fund experiments to democratize the future of artificial intelligence. OpenAI, the folks behind ChatGPT, have made this offer to develop rules that AI systems should follow in the future, albeit within legal confines.
Of course, the irony is that the grant program was announced barely hours after OpenAI called for a global regulatory body for AI along the lines of the one governing nuclear power, while also threatening to pull out of the European Union if it could not comply with the regulations the EU is considering.
Does this mean that co-founders Sam Altman, Greg Brockman and Ilya Sutskever are ready to throw money at creating a regulatory framework for themselves while cocking a snook at the rest of the world’s attempts to create one? All based on the premise that the latter do not understand artificial intelligence and its impact on civilization?
Regulation is a must, but who sets the guardrails?
Well, that’s for time to tell. For now, OpenAI has accepted that regulation is a must, but the pace at which generative AI is moving forward means that the law might always be a few steps behind the crime. Which is why the latest announcement of a million-dollar grant program (they’ve promised ten grants of $100K each) is an attempt to outsource this process – or decentralize it, as they call it.
The project aims to fund individuals, teams and companies to develop proofs of concept for a democratic process that could answer questions about guardrails for AI – something the company says is critical for spreading the concept across more geographies while ensuring that everything stays within the confines of the law.
In a blog post, OpenAI says these initial experiments aren’t meant to be binding on decisions, but to explore relevant questions and build innovative democratic tools that can directly inform decisions in the future. “This grant represents a step to establish democratic processes for overseeing superintelligence,” said the post.
The question, though, is guardrails for whom?
The funds are coming from OpenAI’s not-for-profit arm, and the company is aiming to set up a process involving what it calls a “broadly representative group of people” who exchange views, engage in deliberations and then arrive at a transparent decision-making process – one that would help answer questions like “Under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures?”
The post goes on to add that the primary objective of the spending is to foster innovations that enhance democratic methods of governing AI behavior. “We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest,” it said, while indicating that the grant has no commercial interest attached to it.
Are lawmakers not intelligent enough to know AI?
Of course, seen in the light of what Sam Altman had to say about the proposed EU regulations, this appears a bit specious. And if one were to juxtapose this with Altman’s appearance before a US Senate committee a week earlier, where he spoke of specific norms for AI regulation that would minimize the effect on OpenAI, things look murkier, to say the least.
Of course, there could be a sense of altruism beneath all of this, and the trio of founders may actually hope to create some guardrails – not for their own future use, but to provide more data points for countries and industries when these groups sit down to debate the future of AI and the need for a global set of rules around it, just as nuclear power has.
Maybe we could just ask ChatGPT for the answers – or in this case, the questions that need to be asked while formulating laws to curtail its powers, both constructive and destructive. Or would that be tantamount to making AI the judge, jury and executioner of itself?